{"title":"用于 RGB-D 语义分割的自增强特征融合","authors":"Pengcheng Xiang;Baochen Yao;Zefeng Jiang;Chengbin Peng","doi":"10.1109/LSP.2024.3475352","DOIUrl":null,"url":null,"abstract":"Effectively fusing depth and RGB information to fully leverage their complementary strengths is essential for advancing RGB-D semantic segmentation. However, when fusing with RGB information, traditional methods often overlook noises in depth data, presuming that they are of high accuracy. To resolve this issue, we propose a self-enhanced feature fusion network (SEFnet) for RGB-D semantic segmentation in this work. It mainly comprises three steps. Firstly, RGB and depth embeddings from the initial layers of the network are fused together. Secondly, the fused features are enhanced by pure RGB embeddings and are progressively guided by semantic edge labels to suppress irrelevant features. Finally, the enhanced features are combined with high-level RGB features and are fed into a normalizing flow decoder to obtain segmentation results. Experimental results demonstrate that the proposed approach can provide accurate predictions, outperforming state-of-the-art methods on benchmark datasets.","PeriodicalId":13154,"journal":{"name":"IEEE Signal Processing Letters","volume":"31 ","pages":"3015-3019"},"PeriodicalIF":3.2000,"publicationDate":"2024-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Self-Enhanced Feature Fusion for RGB-D Semantic Segmentation\",\"authors\":\"Pengcheng Xiang;Baochen Yao;Zefeng Jiang;Chengbin Peng\",\"doi\":\"10.1109/LSP.2024.3475352\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Effectively fusing depth and RGB information to fully leverage their complementary strengths is essential for advancing RGB-D semantic segmentation. However, when fusing with RGB information, traditional methods often overlook noises in depth data, presuming that they are of high accuracy. To resolve this issue, we propose a self-enhanced feature fusion network (SEFnet) for RGB-D semantic segmentation in this work. It mainly comprises three steps. Firstly, RGB and depth embeddings from the initial layers of the network are fused together. Secondly, the fused features are enhanced by pure RGB embeddings and are progressively guided by semantic edge labels to suppress irrelevant features. Finally, the enhanced features are combined with high-level RGB features and are fed into a normalizing flow decoder to obtain segmentation results. 
Experimental results demonstrate that the proposed approach can provide accurate predictions, outperforming state-of-the-art methods on benchmark datasets.\",\"PeriodicalId\":13154,\"journal\":{\"name\":\"IEEE Signal Processing Letters\",\"volume\":\"31 \",\"pages\":\"3015-3019\"},\"PeriodicalIF\":3.2000,\"publicationDate\":\"2024-10-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Signal Processing Letters\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10706844/\",\"RegionNum\":2,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Signal Processing Letters","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10706844/","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Self-Enhanced Feature Fusion for RGB-D Semantic Segmentation
Effectively fusing depth and RGB information to fully leverage their complementary strengths is essential for advancing RGB-D semantic segmentation. However, when fusing depth with RGB information, traditional methods often overlook noise in the depth data, presuming that it is highly accurate. To resolve this issue, we propose a self-enhanced feature fusion network (SEFnet) for RGB-D semantic segmentation. It comprises three main steps. First, RGB and depth embeddings from the initial layers of the network are fused. Second, the fused features are enhanced by pure RGB embeddings and are progressively guided by semantic edge labels to suppress irrelevant features. Finally, the enhanced features are combined with high-level RGB features and fed into a normalizing flow decoder to obtain the segmentation results. Experimental results demonstrate that the proposed approach provides accurate predictions, outperforming state-of-the-art methods on benchmark datasets.
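The three-step pipeline sketched in the abstract can be illustrated with a minimal PyTorch-style model. All module names, channel widths, and the single affine-coupling stand-in for the normalizing-flow decoder below are illustrative assumptions, not the authors' implementation; edge supervision would be applied as a training loss on the auxiliary edge head.

# Minimal sketch of the three-step SEFnet pipeline described in the abstract.
# Module names, channel sizes, and the toy coupling "decoder" are assumptions.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class SEFnetSketch(nn.Module):
    def __init__(self, num_classes=40, ch=64):
        super().__init__()
        # Shallow encoders producing low-level RGB / depth embeddings.
        self.rgb_stem = conv_block(3, ch)
        self.depth_stem = conv_block(1, ch)
        # Step 1: fuse early RGB and depth embeddings.
        self.fuse = conv_block(2 * ch, ch)
        # Step 2: enhance fused features with pure RGB embeddings; an auxiliary
        # head predicts semantic edges so edge labels can guide the enhancement
        # (supervision is applied via a loss during training, not shown here).
        self.enhance = conv_block(2 * ch, ch)
        self.edge_head = nn.Conv2d(ch, 1, 1)
        # Deeper RGB branch providing high-level features.
        self.rgb_high = nn.Sequential(conv_block(ch, ch), conv_block(ch, ch))
        # Step 3: combine enhanced and high-level features, then decode. The
        # normalizing-flow decoder is replaced by a single affine-coupling-style
        # transform plus a 1x1 classifier, purely as a placeholder.
        self.coupling = nn.Conv2d(ch, 2 * ch, 1)  # predicts per-channel scale/shift
        self.classifier = nn.Conv2d(ch, num_classes, 1)

    def forward(self, rgb, depth):
        f_rgb = self.rgb_stem(rgb)
        f_dep = self.depth_stem(depth)
        fused = self.fuse(torch.cat([f_rgb, f_dep], dim=1))        # step 1
        enhanced = self.enhance(torch.cat([fused, f_rgb], dim=1))  # step 2
        edge_logits = self.edge_head(enhanced)
        high = self.rgb_high(f_rgb)
        scale, shift = self.coupling(high).chunk(2, dim=1)          # step 3
        z = enhanced * torch.sigmoid(scale) + shift
        return self.classifier(z), edge_logits

# Usage (hypothetical shapes): seg and edge logits for a 64x64 RGB-D pair.
# seg, edges = SEFnetSketch()(torch.rand(1, 3, 64, 64), torch.rand(1, 1, 64, 64))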
Journal Introduction:
The IEEE Signal Processing Letters is a monthly, archival publication designed to provide rapid dissemination of original, cutting-edge ideas and timely, significant contributions in signal, image, speech, language, and audio processing. Papers published in the Letters can be presented within one year of their appearance at signal processing conferences such as ICASSP, GlobalSIP, and ICIP, as well as at several workshops organized by the Signal Processing Society.