Exploring the Spectral Prior for Hyperspectral Image Super-Resolution

Qian Hu; Xinya Wang; Junjun Jiang; Xiao-Ping Zhang; Jiayi Ma

IEEE Transactions on Image Processing, vol. 33, pp. 5260-5272, published 2024-09-19.
DOI: 10.1109/TIP.2024.3460470
Article page: https://ieeexplore.ieee.org/document/10684390/
Citations: 0
Abstract
In recent years, many single hyperspectral image super-resolution methods have emerged to enhance the spatial resolution of hyperspectral images without hardware modification. However, existing methods typically face two significant challenges. First, they struggle to handle the high-dimensional nature of hyperspectral data, which often results in high computational complexity and inefficient information utilization. Second, they have not fully leveraged the abundant spectral information in hyperspectral images. To address these challenges, we propose a novel hyperspectral super-resolution network named SNLSR, which recasts the super-resolution problem in the abundance domain. Our SNLSR leverages a spatial preserve decomposition network to estimate the abundance representations of the input hyperspectral image. Notably, the network acknowledges and utilizes the commonly overlooked spatial correlations of hyperspectral images, leading to better reconstruction performance. The estimated low-resolution abundance is then super-resolved through a spatial-spectral attention network, where informative features from both the spatial and spectral domains are fully exploited. Since hyperspectral images are highly correlated across bands, we customize a spectral-wise non-local attention module to mine similar pixels along the spectral dimension for high-frequency detail recovery. Extensive experiments demonstrate the superiority of our method over other state-of-the-art methods both visually and quantitatively. Our code is publicly available at https://github.com/HuQ1an/SNLSR.
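The spectral-wise non-local attention described above can be illustrated with a minimal sketch. This is not the authors' SNLSR implementation (see their repository for that); it is a toy NumPy version, under the assumption that each spectral band is treated as one token and attention weights between bands let every band aggregate information from spectrally similar bands:

```python
import numpy as np

def spectral_nonlocal_attention(x):
    """Toy spectral-wise non-local attention (illustrative only).

    x: hyperspectral patch of shape (bands, H, W).
    Each band is flattened into one token; a softmax over band-to-band
    similarities mixes each band with its spectrally similar neighbors.
    """
    bands, H, W = x.shape
    tokens = x.reshape(bands, H * W)            # one token per spectral band
    scale = np.sqrt(tokens.shape[1])            # scaled dot-product similarity
    sim = tokens @ tokens.T / scale             # (bands, bands) similarity matrix
    sim -= sim.max(axis=1, keepdims=True)       # subtract row max for stability
    attn = np.exp(sim)
    attn /= attn.sum(axis=1, keepdims=True)     # row-wise softmax over bands
    out = attn @ tokens                         # aggregate spectrally similar bands
    return out.reshape(bands, H, W), attn

rng = np.random.default_rng(0)
cube = rng.random((31, 8, 8))                   # toy 31-band hyperspectral patch
refined, attn = spectral_nonlocal_attention(cube)
print(refined.shape)                            # (31, 8, 8)
```

The real module additionally learns query/key/value projections and operates on feature maps rather than raw bands, but the core idea, non-local aggregation along the spectral axis rather than the spatial one, is the same.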