Low-Rank Transformer for High-Resolution Hyperspectral Computational Imaging

IF 11.6 · CAS Tier 2 (Computer Science) · Q1 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) · International Journal of Computer Vision · Pub Date: 2024-08-20 · DOI: 10.1007/s11263-024-02203-7
Yuanye Liu, Renwei Dian, Shutao Li
{"title":"Low-Rank Transformer for High-Resolution Hyperspectral Computational Imaging","authors":"Yuanye Liu, Renwei Dian, Shutao Li","doi":"10.1007/s11263-024-02203-7","DOIUrl":null,"url":null,"abstract":"<p>Spatial-spectral fusion aims to obtain high-resolution hyperspectral image (HR-HSI) by fusing low-resolution hyperspectral image (LR-HSI) and high-resolution multispectral image (MSI). Recently, many convolutional neural network (CNN)-based methods have achieved excellent results. However, these methods only consider local contextual information, which limits the fusion performance. Although some Transformer-based methods overcome this problem, they ignore some intrinsic characteristics of HR-HSI, such as spatial low-rank characteristics, resulting in large parameters and high computational cost. To address this problem, we propose a low-rank Transformer network (LRTN) for spatial-spectral fusion. LRTN can make full use of the spatial prior of MSI and the spectral prior of LR-HSI, thereby achieving outstanding fusion performance. Specifically, in the feature extraction stage, we utilize the cross-attention mechanism to force the model to focus on spatial information that is not available in LR-HSI and spectral information that is not available in MSI. In the feature fusion stage, we carefully design a self-attention mechanism guided by spatial and spectral priors to improve spatial and spectral fidelity. Moreover, we present a novel spatial low-rank cross-attention module, which can better capture global spatial information compared to other Transformer structures. In this module, we combine the matrix factorization theorem to fully exploit the spatial low-rank characteristics of HSI, which reduces parameters and computational cost while ensuring fusion quality. Experiments on several datasets demonstrate that our method outperforms the current state-of-the-art spatial-spectral fusion methods.</p>","PeriodicalId":13752,"journal":{"name":"International Journal of Computer Vision","volume":"144 1","pages":""},"PeriodicalIF":11.6000,"publicationDate":"2024-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Computer Vision","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s11263-024-02203-7","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Spatial-spectral fusion aims to obtain a high-resolution hyperspectral image (HR-HSI) by fusing a low-resolution hyperspectral image (LR-HSI) with a high-resolution multispectral image (MSI). Recently, many convolutional neural network (CNN)-based methods have achieved excellent results. However, these methods consider only local contextual information, which limits fusion performance. Although some Transformer-based methods overcome this problem, they ignore intrinsic characteristics of HR-HSI, such as its spatial low-rank structure, resulting in large parameter counts and high computational cost. To address this problem, we propose a low-rank Transformer network (LRTN) for spatial-spectral fusion. LRTN makes full use of the spatial prior of the MSI and the spectral prior of the LR-HSI, thereby achieving outstanding fusion performance. Specifically, in the feature extraction stage, we use a cross-attention mechanism to force the model to focus on the spatial information absent from the LR-HSI and the spectral information absent from the MSI. In the feature fusion stage, we carefully design a self-attention mechanism guided by spatial and spectral priors to improve spatial and spectral fidelity. Moreover, we present a novel spatial low-rank cross-attention module that captures global spatial information better than other Transformer structures. In this module, we apply matrix factorization to fully exploit the spatial low-rank characteristics of HSI, which reduces parameters and computational cost while maintaining fusion quality. Experiments on several datasets demonstrate that our method outperforms current state-of-the-art spatial-spectral fusion methods.
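
The abstract gives no implementation details, but its core efficiency claim — compressing the spatial dimension via a low-rank factorization so that cross-attention between modalities no longer scales quadratically in the number of pixels — can be illustrated with a short sketch. The PyTorch module below is a minimal illustration of that idea under one plausible reading: the key/value tokens from the guiding feature map are reduced to r = pool_size² basis tokens by pooling before attention. The class name LowRankCrossAttention, the pool_size parameter, and the overall structure are assumptions for illustration, not the paper's actual LRTN module.

```python
import torch
import torch.nn as nn

class LowRankCrossAttention(nn.Module):
    """Cross-attention whose keys/values come from a rank-reduced set of
    r = pool_size**2 spatial tokens, so the attention map is N x r rather
    than N x N (N = H*W). A sketch of the low-rank idea, not the paper's code."""

    def __init__(self, dim, heads=4, pool_size=8):
        super().__init__()
        assert dim % heads == 0
        self.heads = heads
        self.scale = (dim // heads) ** -0.5
        self.to_q = nn.Conv2d(dim, dim, 1, bias=False)
        self.to_kv = nn.Conv2d(dim, 2 * dim, 1, bias=False)
        # Spatial rank reduction: pool the guiding feature map to a small grid,
        # analogous to keeping r components of a rank-r factorization.
        self.pool = nn.AdaptiveAvgPool2d(pool_size)
        self.proj = nn.Conv2d(dim, dim, 1)

    def forward(self, x, guide):
        # x:     (B, C, H, W) features to refine (e.g. upsampled LR-HSI features)
        # guide: (B, C, H, W) features supplying spatial detail (e.g. MSI features)
        b, c, h, w = x.shape
        q = self.to_q(x).flatten(2).transpose(1, 2)             # (B, N, C)
        kv = self.to_kv(self.pool(guide)).flatten(2)            # (B, 2C, r)
        k, v = kv.transpose(1, 2).chunk(2, dim=-1)              # (B, r, C) each

        def split_heads(t):  # (B, T, C) -> (B, heads, T, C // heads)
            return t.view(b, -1, self.heads, c // self.heads).transpose(1, 2)

        q, k, v = split_heads(q), split_heads(k), split_heads(v)
        attn = (q @ k.transpose(-2, -1) * self.scale).softmax(dim=-1)  # (B, heads, N, r)
        out = (attn @ v).transpose(1, 2).reshape(b, h * w, c)          # (B, N, C)
        out = out.transpose(1, 2).reshape(b, c, h, w)
        return x + self.proj(out)  # residual connection preserves the input

if __name__ == "__main__":
    m = LowRankCrossAttention(dim=64, heads=4, pool_size=8)
    x, guide = torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32)
    print(m(x, guide).shape)  # torch.Size([2, 64, 32, 32])
```

With N = H·W query tokens and r = pool_size² key/value tokens, the attention map has shape N × r instead of N × N, which is the kind of parameter and compute saving the abstract attributes to exploiting the spatial low-rank structure of HSI.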

Source Journal
International Journal of Computer Vision (Engineering & Technology · Computer Science: Artificial Intelligence)
CiteScore: 29.80
Self-citation rate: 2.10%
Annual publications: 163
Review time: 6 months
Journal Introduction: The International Journal of Computer Vision (IJCV) serves as a platform for sharing new research findings in the rapidly growing field of computer vision. It publishes 12 issues annually and presents high-quality, original contributions to the science and engineering of computer vision. The journal publishes several article types to accommodate different research outputs. Regular articles, spanning up to 25 journal pages, present significant technical advances of broad interest to the field. Short articles, limited to 10 pages, offer a swift publication path for novel research results. Survey articles, comprising up to 30 pages, offer critical evaluations of the state of the art in computer vision or tutorial presentations of relevant topics. In addition to technical articles, the journal includes book reviews, position papers, and editorials by prominent scientific figures. Authors are encouraged to include supplementary material online, such as images, video sequences, data sets, and software, to enhance the understanding and reproducibility of the published research.