TranSal: Depth-guided Transformer for RGB-D Salient Object Detection

Cuili Yao, Lin Feng, Yuqiu Kong, Lin Xiao, Tao Chen

2022 8th Annual International Conference on Network and Information Systems for Computers (ICNISC), September 2022. DOI: 10.1109/ICNISC57059.2022.00171
In recent years, RGB-D salient object detection (SOD) has attracted increasing attention and is widely employed in computer vision applications. Fully convolutional networks dominate many RGB-D SOD tasks and have achieved outstanding results. However, cross-modality fusion of low-quality depth and RGB cues remains challenging. Motivated by the Transformer's strong performance in image recognition and segmentation, we present TranSal, a depth-guided Transformer network for RGB-D SOD. The proposed model adopts a dual-branch U-Net architecture. First, it extracts multi-level RGB and depth features with ResNet-50. Second, it applies Transformer layers to model the image as a sequence. Finally, the representation is projected back into spatial order, and a depth-guided fusion subnetwork fuses the multi-scale cross-modality features to generate an accurate saliency map. Compared to previous models, TranSal effectively mitigates the negative impact of low-quality depth information and produces saliency maps with clear contours and accurate semantics. Experiments and analyses on five large-scale benchmarks verify that TranSal achieves satisfactory performance compared to recent state-of-the-art methods.