HRSF-Net: A High-Resolution Strong Fusion Network for Pixel-Level Classification of the Thin-Stripped Target for Remote Sensing System
Lifan Zhou; Wenjie Xing; Jie Zhu; Yu Xia; Shan Zhong; Shengrong Gong
IEEE Journal on Miniaturization for Air and Space Systems, vol. 4, no. 4, pp. 368-375, published 2023-07-27.
DOI: 10.1109/JMASS.2023.3299330 (https://ieeexplore.ieee.org/document/10195987/)
Abstract
High-resolution pixel-level classification of roads and rivers in remote sensing imagery has great application value and has become a research focus that has received extensive attention from the remote sensing community. In recent years, deep convolutional neural networks (DCNNs) have been applied to the pixel-level classification of remote sensing images and have shown remarkable performance. However, traditional DCNNs often produce discontinuous and incomplete pixel-level classification results when dealing with thin-stripped roads and rivers. To address this problem, we propose a high-resolution strong fusion network (HRSF-Net), which keeps the feature map at high resolution and minimizes the loss of texture information of thin-stripped targets caused by repeated downsampling. In addition, a pixel relationship enhancement and dual-channel attention (PRE-DCA) module is proposed to fully exploit the strong correlation between thin-stripped target pixels, and a hetero-resolution fusion (HRF) module is proposed to better fuse feature maps of different resolutions. The proposed HRSF-Net is evaluated on two public remote sensing datasets. Ablation experiments verify the effectiveness of each module of HRSF-Net, and comparative experiments show that HRSF-Net achieves mIoU of 79.05% and 64.46% on the two datasets, respectively, outperforming several advanced pixel-level classification methods.
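
The abstract does not specify the implementation details of the PRE-DCA and HRF modules. The sketch below is only a minimal illustration of the general ideas described above: channel attention computed from pooled statistics and fusion of feature maps with different resolutions. All class names, layer choices, and hyperparameters (e.g., DualChannelAttention, HeteroResolutionFusion, the reduction ratio) are assumptions for illustration and are not taken from the authors' code.

```python
# Illustrative sketch only; not the authors' implementation of PRE-DCA or HRF.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DualChannelAttention(nn.Module):
    """Generic channel attention driven by average- and max-pooled statistics (assumed design)."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        avg = self.mlp(F.adaptive_avg_pool2d(x, 1).view(b, c))
        mx = self.mlp(F.adaptive_max_pool2d(x, 1).view(b, c))
        weights = torch.sigmoid(avg + mx).view(b, c, 1, 1)
        return x * weights


class HeteroResolutionFusion(nn.Module):
    """Project and upsample a low-resolution branch, then fuse it with the high-resolution branch."""

    def __init__(self, high_channels: int, low_channels: int):
        super().__init__()
        self.project = nn.Conv2d(low_channels, high_channels, kernel_size=1)
        self.attention = DualChannelAttention(high_channels)

    def forward(self, high: torch.Tensor, low: torch.Tensor) -> torch.Tensor:
        low = self.project(low)
        low = F.interpolate(low, size=high.shape[-2:], mode="bilinear", align_corners=False)
        return self.attention(high + low)


if __name__ == "__main__":
    high = torch.randn(1, 48, 128, 128)  # high-resolution branch
    low = torch.randn(1, 96, 64, 64)     # lower-resolution branch
    fused = HeteroResolutionFusion(48, 96)(high, low)
    print(fused.shape)  # torch.Size([1, 48, 128, 128])
```

Keeping the high-resolution branch as the fusion target, as in this sketch, reflects the abstract's stated goal of preserving fine texture of thin-stripped targets rather than recovering it after aggressive downsampling.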