Benefiting from the rapid progress of deep learning, spatial-spectral fusion (SSF) is regarded as a promising alternative to acquiring hyperspectral images (HSI) with costly imaging devices. Despite this remarkable progress, however, current solutions require training and storing multiple models for different scaling factors. To overcome this dilemma, we propose a spatial-spectral fusion neural operator (SFNO) that performs arbitrary-scale SSF within the operator-learning framework. Specifically, SFNO approaches the problem from the perspective of approximation theory: the features of the two degraded functions are embedded into a high-dimensional latent space through pointwise convolution layers, thereby capturing richer spectral information. The mapping between function spaces is then approximated via the Galerkin integral (GI) mechanism, followed by a final dimensionality-reduction step that produces the high-resolution HSI. Moreover, we propose a progressive resampling integration (PR) mechanism that resamples the integrand's domain in the triple kernel integration to provide non-local, multi-scale information. The synergy of the two integration mechanisms enables SFNO to handle magnification factors never encountered during training. Extensive experiments on the CAVE, Chikusei, Pavia Centre, Harvard, and real-world datasets demonstrate that our SFNO delivers substantial improvements over existing state-of-the-art methods. In particular, under the 8× upsampling setting on the CAVE, Chikusei, and Pavia Centre datasets, SFNO surpasses the second-best model by 0.56 dB, 1.05 dB, and 0.72 dB in PSNR, respectively. Our code is publicly available at https://github.com/weili419/SFNO.
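To make the lift-integrate-project pipeline described above concrete, the following is a minimal, illustrative PyTorch sketch, not the authors' SFNO implementation (the official code is at the repository above). The module names (`GalerkinIntegral`, `SSFBlockSketch`) and all hyperparameters are hypothetical; the sketch only shows the general pattern of lifting fused features with pointwise (1×1) convolutions, applying a Galerkin-style linear-attention kernel integral in the latent space, and projecting back to the target spectral bands.

```python
import torch
import torch.nn as nn


class GalerkinIntegral(nn.Module):
    """Sketch of a Galerkin-type attention block: the kernel integral over the
    spatial domain is approximated as Q (K^T V) / n, giving cost linear in the
    number of pixels instead of quadratic (assumption: single-head, no masking)."""
    def __init__(self, dim):
        super().__init__()
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.to_v = nn.Linear(dim, dim)
        self.norm_k = nn.LayerNorm(dim)
        self.norm_v = nn.LayerNorm(dim)

    def forward(self, x):  # x: (batch, n_pixels, dim)
        q = self.to_q(x)
        k = self.norm_k(self.to_k(x))
        v = self.norm_v(self.to_v(x))
        # Contract K^T V first (dim x dim), then apply Q: a learned kernel
        # integral over the spatial domain.
        return q @ (k.transpose(-2, -1) @ v) / x.shape[1]


class SSFBlockSketch(nn.Module):
    """Hypothetical pipeline sketch: lift concatenated LR-HSI / HR-MSI features
    with pointwise convolutions, apply the Galerkin-type integral in the latent
    space, then reduce dimensionality back to the target spectral bands."""
    def __init__(self, in_bands, latent_dim, out_bands):
        super().__init__()
        self.lift = nn.Conv2d(in_bands, latent_dim, kernel_size=1)
        self.gi = GalerkinIntegral(latent_dim)
        self.project = nn.Conv2d(latent_dim, out_bands, kernel_size=1)

    def forward(self, fused):  # fused: (batch, in_bands, H, W)
        z = self.lift(fused)                     # pointwise lifting
        b, c, h, w = z.shape
        z = z.flatten(2).transpose(1, 2)         # (b, h*w, c)
        z = z + self.gi(z)                       # kernel integral + residual
        z = z.transpose(1, 2).reshape(b, c, h, w)
        return self.project(z)                   # pointwise projection
```

As a usage example, a tensor of concatenated degraded inputs shaped `(1, 34, 128, 128)` (e.g., 31 hyperspectral plus 3 multispectral bands, both values assumed for illustration) passed through `SSFBlockSketch(34, 64, 31)` yields a 31-band output at the same spatial size; the arbitrary-scale resampling of the PR mechanism is omitted here.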
