Feature Reconstruction Guided Fusion Network for Hyperspectral and LiDAR Classification
Zhi Li; Ke Zheng; Lianru Gao; Nannan Zi; Chengrui Li
IEEE Transactions on Geoscience and Remote Sensing, vol. 63, pp. 1-14
DOI: 10.1109/TGRS.2025.3562246
Published: 2025-04-18
URL: https://ieeexplore.ieee.org/document/10970021/
Citations: 0
Abstract
Deep learning has become increasingly popular in hyperspectral image (HSI) and light detection and ranging (LiDAR) data classification, thanks to its powerful feature learning and representation capabilities. However, HSI often contains substantial redundant information, which can hinder efficient data utilization. Furthermore, the significant disparity in information content between HSI and LiDAR data poses a major challenge in representing and aligning semantic information across these two modalities. To address these challenges, we propose a fusion network structure guided by feature reconstruction embedding (FRE). This approach employs feature decomposition to reconstruct HSI features and incorporates weight embedding to seamlessly integrate the reconstructed information into classification features. Furthermore, we introduce a cross-modal attention fusion module designed to merge extracted HSI and LiDAR features. This module fully exploits the complementary nature of these two types of features, facilitating effective information exchange and semantic alignment across multimodal data. We evaluated our method on three widely used HSI and LiDAR datasets: Houston 2013, Augsburg, and MUUFL. Experimental results demonstrate that our proposed FRGFNet significantly outperforms traditional probabilistic methods and state-of-the-art deep learning networks, showcasing its effectiveness in multisource data fusion.
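The abstract does not specify the internals of the cross-modal attention fusion module; the paper itself should be consulted for the actual architecture. As a rough illustration of the general idea of cross-modal attention, where each modality's features query the other modality, the following NumPy sketch shows one common formulation. All function names, feature shapes, and the final concatenation-based fusion are assumptions for illustration, not the authors' design.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(hsi_feat, lidar_feat):
    """Generic bidirectional cross-modal attention (illustrative only).

    hsi_feat:   (n_tokens, d) features extracted from HSI patches
    lidar_feat: (m_tokens, d) features extracted from LiDAR patches
    Returns a fused vector of length 2*d.
    """
    d = hsi_feat.shape[-1]
    # HSI tokens act as queries over LiDAR keys/values,
    # pulling in elevation/structure cues that HSI lacks.
    attn_h2l = softmax(hsi_feat @ lidar_feat.T / np.sqrt(d))
    hsi_enriched = attn_h2l @ lidar_feat
    # Symmetrically, LiDAR tokens query the HSI features,
    # pulling in spectral cues that LiDAR lacks.
    attn_l2h = softmax(lidar_feat @ hsi_feat.T / np.sqrt(d))
    lidar_enriched = attn_l2h @ hsi_feat
    # One simple fusion choice: pool each enriched stream and concatenate.
    return np.concatenate([hsi_enriched.mean(axis=0),
                           lidar_enriched.mean(axis=0)])

rng = np.random.default_rng(0)
fused = cross_modal_attention(rng.normal(size=(9, 16)),
                              rng.normal(size=(9, 16)))
print(fused.shape)  # (32,)
```

In practice such a module would use learned query/key/value projections and multiple heads; the sketch omits these to keep the exchange-and-align mechanism visible.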
Journal Introduction:
IEEE Transactions on Geoscience and Remote Sensing (TGRS) is a monthly publication that focuses on the theory, concepts, and techniques of science and engineering as applied to sensing the land, oceans, atmosphere, and space; and the processing, interpretation, and dissemination of this information.