SSAF-Net: A Spatial-Spectral Adaptive Fusion Network for Hyperspectral Unmixing With Endmember Variability

IEEE Transactions on Geoscience and Remote Sensing (IF 8.6, JCR Q1, Engineering, Electrical & Electronic) · Pub Date: 2025-02-20 · Vol. 63, pp. 1-15 · DOI: 10.1109/TGRS.2025.3544037
Wei Gao;Jingyu Yang;Yu Zhang;Youssef Akoudad;Jie Chen
Citations: 0

Abstract

Deep learning (DL) has recently garnered substantial interest in hyperspectral unmixing (HU) due to its exceptional learning capabilities. In particular, unsupervised unmixing methods based on autoencoders have become a research hotspot, with many existing networks focusing on the fusion of spatial and spectral information. However, the diversity of fusion structures makes it challenging to select appropriate modules that meet unmixing requirements, while the issue of endmember variability is often neglected. In this article, we propose a novel spatial-spectral adaptive fusion network (SSAF-Net) that accounts for endmember variability. The network consists of two cascaded encoders and a deep generative model (DGM) based on a variational autoencoder (VAE). The encoders perform local spatial-spectral information fusion through channel and spatial attention mechanisms, respectively, while self-perception loss facilitates global information fusion during the cascading process. In addition, we address endmember variability using a proportional perturbation model (PPM), learning the necessary endmember parameters through an elaborately designed DGM. Our SSAF-Net learns both endmember variability and the corresponding abundances in an unsupervised manner. Experimental results on a synthetic dataset and real-world datasets validate the significant superiority of SSAF-Net compared to other methods. The code for this work is available at https://github.com/yjysimply/SSAF-Net.
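The abstract's proportional perturbation model (PPM) for endmember variability can be illustrated with a toy forward model: each pixel sees the reference endmembers scaled by a small, signature-proportional perturbation before linear mixing. The sketch below is a hedged NumPy illustration, not the paper's implementation; the exact perturbation form (multiplicative, zero-mean, per-pixel) and the names `M0`, `mix_with_ppm`, and `scale` are assumptions made for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
bands, n_endmembers, n_pixels = 50, 3, 100

# Reference endmember signatures as columns (synthetic stand-ins).
M0 = rng.uniform(0.1, 0.9, size=(bands, n_endmembers))

# Abundances drawn on the probability simplex, so they satisfy the
# abundance nonnegativity and sum-to-one constraints (ANC + ASC).
A = rng.dirichlet(np.ones(n_endmembers), size=n_pixels).T  # (n_endmembers, n_pixels)

def mix_with_ppm(M0, A, scale=0.05, rng=None):
    """Mix pixels under a proportional perturbation model: each pixel n
    observes endmembers M0 * (1 + P_n), where P_n is a small random
    perturbation proportional to the reference signatures."""
    if rng is None:
        rng = np.random.default_rng()
    bands, _ = M0.shape
    _, n_pixels = A.shape
    Y = np.empty((bands, n_pixels))
    for n in range(n_pixels):
        P = scale * rng.standard_normal(M0.shape)
        M_n = M0 * (1.0 + P)        # per-pixel proportionally perturbed endmembers
        Y[:, n] = M_n @ A[:, n]     # linear mixing for this pixel
    return Y

Y = mix_with_ppm(M0, A, scale=0.05, rng=rng)
print(Y.shape)  # (50, 100)
```

In SSAF-Net the per-pixel perturbation parameters are not sampled like this but learned by the VAE-based deep generative model, jointly with the abundances, in an unsupervised fashion.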
Source Journal

IEEE Transactions on Geoscience and Remote Sensing (Engineering & Technology – Geochemistry & Geophysics)
CiteScore: 11.50
Self-citation rate: 28.00%
Annual articles: 1912
Review time: 4.0 months
Journal description: IEEE Transactions on Geoscience and Remote Sensing (TGRS) is a monthly publication that focuses on the theory, concepts, and techniques of science and engineering as applied to sensing the land, oceans, atmosphere, and space, and on the processing, interpretation, and dissemination of this information.
Latest articles in this journal:
- Near-Real-Time InSAR Phase Estimation for Large-Scale Surface Displacement Monitoring
- Wavelet Query Multi-Head Attention Generative Adversarial Network for Remote Sensing Image Super-Resolution Reconstruction
- Extraction of Spectral Polarimetric Features with Weather Radar and Its Application in Vertical Wind Shear Identification
- Towards Robust Urban Region Representation Learning through Hierarchical Modeling for Modifiable Areal Unit Problem Mitigation
- Topology-Guided Boundary-Aware Feature Learning for Hyperspectral Image Classification with Dual Graph Networks