Two‐branch global spatial–spectral fusion transformer network for hyperspectral image classification

Erxin Xie, Na Chen, Genwei Zhang, Jiangtao Peng, Weiwei Sun
{"title":"用于高光谱图像分类的双分支全局空间-光谱融合变换器网络","authors":"Erxin Xie, Na Chen, Genwei Zhang, Jiangtao Peng, Weiwei Sun","doi":"10.1111/phor.12491","DOIUrl":null,"url":null,"abstract":"Transformer has achieved outstanding performance in hyperspectral image classification (HSIC) thanks to its effectiveness in modelling the long‐term dependence relation. However, most of the existing algorithms combine convolution with transformer and use convolution for spatial–spectral information fusion, which cannot adequately learn the spatial–spectral fusion features of hyperspectral images (HSIs). To mine the rich spatial and spectral features, a two‐branch global spatial–spectral fusion transformer (GSSFT) model is designed in this paper, in which a spatial–spectral information fusion (SSIF) module is designed to fuse features of spectral and spatial branches. For the spatial branch, the local multiscale swin transformer (LMST) module is devised to obtain local–global spatial information of the samples and the background filtering (BF) module is constructed to weaken the weights of irrelevant pixels. The information learned from the spatial branch and the spectral branch is effectively fused to get final classification results. Extensive experiments are conducted on three HSI datasets, and the results of experiments show that the designed GSSFT method performs well compared with the traditional convolutional neural network and transformer‐based methods.","PeriodicalId":22881,"journal":{"name":"The Photogrammetric Record","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Two‐branch global spatial–spectral fusion transformer network for hyperspectral image classification\",\"authors\":\"Erxin Xie, Na Chen, Genwei Zhang, Jiangtao Peng, Weiwei Sun\",\"doi\":\"10.1111/phor.12491\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Transformer has achieved outstanding performance in hyperspectral image classification (HSIC) thanks to its effectiveness in modelling the long‐term dependence relation. However, most of the existing algorithms combine convolution with transformer and use convolution for spatial–spectral information fusion, which cannot adequately learn the spatial–spectral fusion features of hyperspectral images (HSIs). To mine the rich spatial and spectral features, a two‐branch global spatial–spectral fusion transformer (GSSFT) model is designed in this paper, in which a spatial–spectral information fusion (SSIF) module is designed to fuse features of spectral and spatial branches. For the spatial branch, the local multiscale swin transformer (LMST) module is devised to obtain local–global spatial information of the samples and the background filtering (BF) module is constructed to weaken the weights of irrelevant pixels. The information learned from the spatial branch and the spectral branch is effectively fused to get final classification results. 
Extensive experiments are conducted on three HSI datasets, and the results of experiments show that the designed GSSFT method performs well compared with the traditional convolutional neural network and transformer‐based methods.\",\"PeriodicalId\":22881,\"journal\":{\"name\":\"The Photogrammetric Record\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-04-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"The Photogrammetric Record\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1111/phor.12491\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"The Photogrammetric Record","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1111/phor.12491","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Transformer has achieved outstanding performance in hyperspectral image classification (HSIC) thanks to its effectiveness in modelling the long‐term dependence relation. However, most of the existing algorithms combine convolution with transformer and use convolution for spatial–spectral information fusion, which cannot adequately learn the spatial–spectral fusion features of hyperspectral images (HSIs). To mine the rich spatial and spectral features, a two‐branch global spatial–spectral fusion transformer (GSSFT) model is designed in this paper, in which a spatial–spectral information fusion (SSIF) module is designed to fuse features of the spectral and spatial branches. For the spatial branch, the local multiscale swin transformer (LMST) module is devised to obtain local–global spatial information of the samples, and the background filtering (BF) module is constructed to weaken the weights of irrelevant pixels. The information learned from the spatial branch and the spectral branch is effectively fused to obtain the final classification results. Extensive experiments are conducted on three HSI datasets, and the results show that the designed GSSFT method performs well compared with traditional convolutional neural network and transformer‐based methods.
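To make the two‐branch idea concrete, the following is a minimal PyTorch‐style sketch of a spatial–spectral two‐branch transformer classifier of the general kind the abstract describes. It is an illustrative assumption only: the class names (SpectralBranch, SpatialBranch, TwoBranchClassifier), the simple concatenation fusion, and all hyper‐parameters are hypothetical and do not reproduce the paper's actual GSSFT, SSIF, LMST or BF modules.

# Hypothetical sketch of a two-branch spatial–spectral fusion classifier.
# All module names and hyper-parameters are illustrative assumptions,
# not the authors' GSSFT implementation.
import torch
import torch.nn as nn

class SpectralBranch(nn.Module):
    """Treats each spectral band of the centre pixel as one token."""
    def __init__(self, bands: int, dim: int = 64, depth: int = 2, heads: int = 4):
        super().__init__()
        self.embed = nn.Linear(1, dim)                       # one reflectance value per band -> token
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           dim_feedforward=2 * dim, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.pos = nn.Parameter(torch.zeros(1, bands, dim))  # learnable positional encoding

    def forward(self, x):                                    # x: (B, bands)
        tokens = self.embed(x.unsqueeze(-1)) + self.pos      # (B, bands, dim)
        return self.encoder(tokens).mean(dim=1)              # (B, dim) spectral feature

class SpatialBranch(nn.Module):
    """Treats each pixel of a small spatial patch as one token."""
    def __init__(self, bands: int, patch: int = 9, dim: int = 64, depth: int = 2, heads: int = 4):
        super().__init__()
        self.embed = nn.Linear(bands, dim)                   # full spectrum of each pixel -> token
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           dim_feedforward=2 * dim, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.pos = nn.Parameter(torch.zeros(1, patch * patch, dim))

    def forward(self, x):                                    # x: (B, patch, patch, bands)
        b, h, w, c = x.shape
        tokens = self.embed(x.reshape(b, h * w, c)) + self.pos
        return self.encoder(tokens).mean(dim=1)              # (B, dim) spatial feature

class TwoBranchClassifier(nn.Module):
    """Fuses spectral and spatial features by concatenation and predicts the label."""
    def __init__(self, bands: int, classes: int, patch: int = 9, dim: int = 64):
        super().__init__()
        self.spectral = SpectralBranch(bands, dim)
        self.spatial = SpatialBranch(bands, patch, dim)
        self.head = nn.Sequential(nn.Linear(2 * dim, dim), nn.GELU(), nn.Linear(dim, classes))

    def forward(self, patch_cube):                           # patch_cube: (B, patch, patch, bands)
        centre = patch_cube[:, patch_cube.shape[1] // 2, patch_cube.shape[2] // 2, :]
        fused = torch.cat([self.spectral(centre), self.spatial(patch_cube)], dim=-1)
        return self.head(fused)                              # (B, classes) logits

if __name__ == "__main__":
    model = TwoBranchClassifier(bands=200, classes=16, patch=9)
    dummy = torch.randn(4, 9, 9, 200)                        # 4 samples, 9x9 window, 200 bands
    print(model(dummy).shape)                                # torch.Size([4, 16])

In this sketch, fusion is a plain feature concatenation; the paper's SSIF module, swin-style windowed attention (LMST) and background filtering (BF) would replace the simple encoder layers and the concatenation step.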