{"title":"用于 RGB-D 突出物体检测的渐进式跨层融合网络","authors":"Jianbao Li, Chen Pan, Yilin Zheng, Dongping Zhang","doi":"10.1016/j.jvcir.2024.104268","DOIUrl":null,"url":null,"abstract":"<div><p>Depth maps can provide supplementary information for salient object detection (SOD) and perform better in handling complex scenes. Most existing RGB-D methods only utilize deep cues at the same level, and few methods focus on the information linkage between cross-level features. In this study, we propose a Progressive Cross-level Fusion Network (PCF-Net). It ensures the cross-flow of cross-level features by gradually exploring deeper features, which promotes the interaction and fusion of information between different-level features. First, we designed a Cross-Level Guide Cross-Modal Fusion Module (CGCF) that utilizes the spatial information of upper-level features to suppress modal feature noise and to guide lower-level features for cross-modal feature fusion. Next, the proposed Semantic Enhancement Module (SEM) and Local Enhancement Module (LEM) are used to further introduce deeper features, enhance the high-level semantic information and low-level structural information of cross-modal features, and use self-modality attention refinement to improve the enhancement effect. Finally, the multi-scale aggregation decoder mines enhanced feature information in multi-scale spaces and effectively integrates cross-scale features. In this study, we conducted numerous experiments to demonstrate that the proposed PCF-Net outperforms 16 of the most advanced methods on six popular RGB-D SOD datasets.</p></div>","PeriodicalId":54755,"journal":{"name":"Journal of Visual Communication and Image Representation","volume":"104 ","pages":"Article 104268"},"PeriodicalIF":2.6000,"publicationDate":"2024-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Progressive cross-level fusion network for RGB-D salient object detection\",\"authors\":\"Jianbao Li, Chen Pan, Yilin Zheng, Dongping Zhang\",\"doi\":\"10.1016/j.jvcir.2024.104268\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Depth maps can provide supplementary information for salient object detection (SOD) and perform better in handling complex scenes. Most existing RGB-D methods only utilize deep cues at the same level, and few methods focus on the information linkage between cross-level features. In this study, we propose a Progressive Cross-level Fusion Network (PCF-Net). It ensures the cross-flow of cross-level features by gradually exploring deeper features, which promotes the interaction and fusion of information between different-level features. First, we designed a Cross-Level Guide Cross-Modal Fusion Module (CGCF) that utilizes the spatial information of upper-level features to suppress modal feature noise and to guide lower-level features for cross-modal feature fusion. Next, the proposed Semantic Enhancement Module (SEM) and Local Enhancement Module (LEM) are used to further introduce deeper features, enhance the high-level semantic information and low-level structural information of cross-modal features, and use self-modality attention refinement to improve the enhancement effect. Finally, the multi-scale aggregation decoder mines enhanced feature information in multi-scale spaces and effectively integrates cross-scale features. 
In this study, we conducted numerous experiments to demonstrate that the proposed PCF-Net outperforms 16 of the most advanced methods on six popular RGB-D SOD datasets.</p></div>\",\"PeriodicalId\":54755,\"journal\":{\"name\":\"Journal of Visual Communication and Image Representation\",\"volume\":\"104 \",\"pages\":\"Article 104268\"},\"PeriodicalIF\":2.6000,\"publicationDate\":\"2024-08-24\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Visual Communication and Image Representation\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1047320324002244\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Visual Communication and Image Representation","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1047320324002244","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Depth maps can provide supplementary information for salient object detection (SOD) and help in handling complex scenes. Most existing RGB-D methods utilize depth cues only at the same level, and few focus on the information linkage between cross-level features. In this study, we propose a Progressive Cross-level Fusion Network (PCF-Net). It ensures the cross-flow of cross-level features by gradually exploring deeper features, which promotes the interaction and fusion of information between features at different levels. First, we design a Cross-Level Guide Cross-Modal Fusion Module (CGCF) that utilizes the spatial information of upper-level features to suppress modal feature noise and to guide lower-level features for cross-modal feature fusion. Next, the proposed Semantic Enhancement Module (SEM) and Local Enhancement Module (LEM) further introduce deeper features, enhancing the high-level semantic information and low-level structural information of cross-modal features, with self-modality attention refinement used to improve the enhancement effect. Finally, a multi-scale aggregation decoder mines the enhanced feature information in multi-scale spaces and effectively integrates cross-scale features. Extensive experiments demonstrate that the proposed PCF-Net outperforms 16 state-of-the-art methods on six popular RGB-D SOD datasets.
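The abstract names the CGCF module but does not give its internals, so the PyTorch sketch below is only an illustration of the general idea it describes, not the authors' implementation: upper-level (deeper) features produce a spatial guide that suppresses noise in the lower-level RGB and depth features before the two modalities are fused. The class name, layer choices, and channel sizes are all assumptions made for the example.

```python
# Illustrative sketch only: PCF-Net's actual CGCF design is not specified
# in this abstract, so every layer choice below is an assumption.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossLevelGuidedFusion(nn.Module):
    """Hypothetical CGCF-style block: upper-level features provide a spatial
    guide that suppresses modal noise in lower-level RGB and depth features
    before cross-modal fusion."""

    def __init__(self, low_ch: int, high_ch: int):
        super().__init__()
        # Collapse the upper-level features into a single-channel spatial guide.
        self.guide = nn.Sequential(
            nn.Conv2d(high_ch, 1, kernel_size=1),
            nn.Sigmoid(),
        )
        # Fuse the guided RGB and depth streams back to low_ch channels.
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * low_ch, low_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(low_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, rgb_low, depth_low, feat_high):
        # Upsample the guide to the lower-level spatial resolution.
        g = F.interpolate(self.guide(feat_high), size=rgb_low.shape[2:],
                          mode="bilinear", align_corners=False)
        # Apply the shared guide to suppress noise in both modalities.
        rgb_g, depth_g = rgb_low * g, depth_low * g
        return self.fuse(torch.cat([rgb_g, depth_g], dim=1))


# Example: level-2 features (64 ch, 88x88) guided by level-3 features (128 ch, 44x44).
block = CrossLevelGuidedFusion(low_ch=64, high_ch=128)
out = block(torch.randn(1, 64, 88, 88), torch.randn(1, 64, 88, 88),
            torch.randn(1, 128, 44, 44))
print(out.shape)  # torch.Size([1, 64, 88, 88])
```

In this reading, the multiplicative guide acts as cheap spatial attention from a coarser level, which matches the abstract's claim that upper-level spatial information both denoises and steers the lower-level cross-modal fusion.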
Journal introduction:
The Journal of Visual Communication and Image Representation publishes papers on state-of-the-art visual communication and image representation, with emphasis on novel technologies and theoretical work in this multidisciplinary area of pure and applied research. The field of visual communication and image representation is considered in its broadest sense and covers both digital and analog aspects as well as processing and communication in biological visual systems.