Haizhu Pan, Yuexia Zhu, Haimiao Ge, Moqi Liu, Cuiping Shi
DOI: 10.1016/j.ejrs.2023.09.002
Egyptian Journal of Remote Sensing and Space Sciences, vol. 26, no. 3, pp. 839–850. Published 2023-09-20. Impact factor 3.7 (JCR Q2, Environmental Sciences).
Multiscale cross-fusion network for hyperspectral image classification
Recently, deep-learning-based hyperspectral image (HSI) classification methods have attracted widespread attention. Convolutional neural networks, a crucial deep-learning technique, have exhibited outstanding performance in HSI classification. However, challenges remain, such as limited labeled samples and the difficulty of extracting features from complex land-cover objects. To address these challenges, in this paper we propose a multiscale cross-fusion network for HSI classification. It consists of three components: a spectral signature extraction network, a spatial feature extraction network, and a classification network, which are used to extract spectral signatures, extract spatial contextual information, and generate classification results, respectively. Specifically, a cross-branch multiscale convolutional block and channel global contextual attention are integrated to extract spectral signatures, while cross-hierarchy multiscale convolutional blocks and spatial global contextual attention are combined to extract spatial features. Furthermore, special fusion strategies within these blocks promote interaction between features and achieve better feature connectivity. A series of experiments on three public HSI datasets shows that the overall accuracy of the proposed network is 0.57%, 0.61%, and 0.3% higher than that of the state-of-the-art method on the PU, SV, and HH datasets, respectively.
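The core idea described above — filtering a pixel's spectrum at several receptive fields and reweighting the resulting branches with a global contextual attention before fusion — can be sketched in a few lines. The following is a minimal, hypothetical NumPy illustration, not the authors' implementation: averaging kernels stand in for learned convolutional filters, and all function names are assumptions for exposition.

```python
import numpy as np

def conv1d_same(x, k):
    # "Same"-padded 1D convolution along the spectral axis.
    pad = len(k) // 2
    xp = np.pad(x, pad, mode="edge")
    return np.array([np.dot(xp[i:i + len(k)], k) for i in range(len(x))])

def multiscale_block(x, kernel_sizes=(3, 5, 7)):
    # Multiscale branches: filter the spectrum at several receptive
    # fields. Uniform averaging kernels are placeholders for the
    # learned filters a real network would train.
    return np.stack([conv1d_same(x, np.ones(ks) / ks) for ks in kernel_sizes])

def global_contextual_attention(feats):
    # Reweight each branch by a softmax over its global average
    # response, then fuse the branches by summation.
    g = feats.mean(axis=1)            # one global statistic per branch
    w = np.exp(g - g.max())
    w /= w.sum()
    return (feats * w[:, None]).sum(axis=0)

spectrum = np.linspace(0.0, 1.0, 200)   # one pixel's 200-band spectrum
feats = multiscale_block(spectrum)       # shape (3, 200): one row per scale
fused = global_contextual_attention(feats)  # shape (200,): fused signature
```

In the paper's actual network the same pattern operates on 2D/3D feature maps with learned weights and cross-branch fusion connections; the sketch only shows the shape of the computation.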
Journal Introduction:
The Egyptian Journal of Remote Sensing and Space Sciences (EJRS) encompasses a comprehensive range of topics within Remote Sensing, Geographic Information Systems (GIS), planetary geology, and space technology development, including theories, applications, and modeling. EJRS aims to disseminate high-quality, peer-reviewed research focusing on the advancement of remote sensing and GIS technologies and their practical applications for effective planning, sustainable development, and environmental resource conservation. The journal particularly welcomes innovative papers with broad scientific appeal.