Siyu Chen, Ting Han, Changshe Zhang, Weiquan Liu, Jinhe Su, Zongyue Wang, Guorong Cai
{"title":"深度很重要:探索 RGB-D 的深度交互,实现交通场景中的语义分割","authors":"Siyu Chen, Ting Han, Changshe Zhang, Weiquan Liu, Jinhe Su, Zongyue Wang, Guorong Cai","doi":"arxiv-2409.07995","DOIUrl":null,"url":null,"abstract":"RGB-D has gradually become a crucial data source for understanding complex\nscenes in assisted driving. However, existing studies have paid insufficient\nattention to the intrinsic spatial properties of depth maps. This oversight\nsignificantly impacts the attention representation, leading to prediction\nerrors caused by attention shift issues. To this end, we propose a novel\nlearnable Depth interaction Pyramid Transformer (DiPFormer) to explore the\neffectiveness of depth. Firstly, we introduce Depth Spatial-Aware Optimization\n(Depth SAO) as offset to represent real-world spatial relationships. Secondly,\nthe similarity in the feature space of RGB-D is learned by Depth Linear\nCross-Attention (Depth LCA) to clarify spatial differences at the pixel level.\nFinally, an MLP Decoder is utilized to effectively fuse multi-scale features\nfor meeting real-time requirements. Comprehensive experiments demonstrate that\nthe proposed DiPFormer significantly addresses the issue of attention\nmisalignment in both road detection (+7.5%) and semantic segmentation (+4.9% /\n+1.5%) tasks. DiPFormer achieves state-of-the-art performance on the KITTI\n(97.57% F-score on KITTI road and 68.74% mIoU on KITTI-360) and Cityscapes\n(83.4% mIoU) datasets.","PeriodicalId":501130,"journal":{"name":"arXiv - CS - Computer Vision and Pattern Recognition","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Depth Matters: Exploring Deep Interactions of RGB-D for Semantic Segmentation in Traffic Scenes\",\"authors\":\"Siyu Chen, Ting Han, Changshe Zhang, Weiquan Liu, Jinhe Su, Zongyue Wang, Guorong Cai\",\"doi\":\"arxiv-2409.07995\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"RGB-D has gradually become a crucial data source for understanding complex\\nscenes in assisted driving. However, existing studies have paid insufficient\\nattention to the intrinsic spatial properties of depth maps. This oversight\\nsignificantly impacts the attention representation, leading to prediction\\nerrors caused by attention shift issues. To this end, we propose a novel\\nlearnable Depth interaction Pyramid Transformer (DiPFormer) to explore the\\neffectiveness of depth. Firstly, we introduce Depth Spatial-Aware Optimization\\n(Depth SAO) as offset to represent real-world spatial relationships. Secondly,\\nthe similarity in the feature space of RGB-D is learned by Depth Linear\\nCross-Attention (Depth LCA) to clarify spatial differences at the pixel level.\\nFinally, an MLP Decoder is utilized to effectively fuse multi-scale features\\nfor meeting real-time requirements. Comprehensive experiments demonstrate that\\nthe proposed DiPFormer significantly addresses the issue of attention\\nmisalignment in both road detection (+7.5%) and semantic segmentation (+4.9% /\\n+1.5%) tasks. 
DiPFormer achieves state-of-the-art performance on the KITTI\\n(97.57% F-score on KITTI road and 68.74% mIoU on KITTI-360) and Cityscapes\\n(83.4% mIoU) datasets.\",\"PeriodicalId\":501130,\"journal\":{\"name\":\"arXiv - CS - Computer Vision and Pattern Recognition\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Computer Vision and Pattern Recognition\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.07995\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Computer Vision and Pattern Recognition","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.07995","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Depth Matters: Exploring Deep Interactions of RGB-D for Semantic Segmentation in Traffic Scenes
RGB-D has gradually become a crucial data source for understanding complex scenes in assisted driving. However, existing studies have paid insufficient attention to the intrinsic spatial properties of depth maps. This oversight significantly degrades the attention representation, leading to prediction errors caused by attention shift. To this end, we propose a novel learnable Depth interaction Pyramid Transformer (DiPFormer) to explore the effectiveness of depth. Firstly, we introduce Depth Spatial-Aware Optimization (Depth SAO) as an offset to represent real-world spatial relationships.
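The abstract does not detail how Depth SAO is implemented. Below is a minimal sketch of one plausible reading, in which the depth map is projected into a per-pixel offset added to the RGB features so that attention sees real-world geometry rather than only 2D grid positions. All module and parameter names (`DepthSAO`, `offset_proj`, `embed_dim`) are hypothetical, not the authors' code.

```python
# Hypothetical Depth SAO-style module: a lightweight convolution turns a
# 1-channel depth map into an embedding-sized offset field that is added
# to the RGB features. Shapes and design are assumptions.
import torch
import torch.nn as nn

class DepthSAO(nn.Module):
    def __init__(self, embed_dim: int = 64):
        super().__init__()
        self.offset_proj = nn.Sequential(
            nn.Conv2d(1, embed_dim, kernel_size=3, padding=1),
            nn.GELU(),
            nn.Conv2d(embed_dim, embed_dim, kernel_size=1),
        )

    def forward(self, rgb_feat: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
        # rgb_feat: (B, C, H, W) RGB features; depth: (B, 1, H, W) depth map.
        offset = self.offset_proj(depth)  # depth-derived spatial offset
        return rgb_feat + offset          # geometry-aware features

# Smoke test with dummy tensors.
sao = DepthSAO(embed_dim=64)
out = sao(torch.randn(2, 64, 32, 32), torch.randn(2, 1, 32, 32))
print(out.shape)  # torch.Size([2, 64, 32, 32])
```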
Secondly, the similarity in the RGB-D feature space is learned by Depth Linear Cross-Attention (Depth LCA) to clarify spatial differences at the pixel level.
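The paper's exact formulation of Depth LCA is not given here; a minimal sketch follows, assuming a standard kernelized (linear) cross-attention in which RGB tokens provide the queries and depth tokens the keys and values, keeping cost linear in the number of pixels. Class and layer names are assumptions.

```python
# Hypothetical Depth LCA as linear cross-attention (Katharopoulos-style
# kernel trick with an elu+1 feature map): queries from RGB, keys/values
# from depth, O(N) in the number of pixel tokens.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DepthLCA(nn.Module):
    def __init__(self, dim: int = 64):
        super().__init__()
        self.q = nn.Linear(dim, dim)  # queries from RGB tokens
        self.k = nn.Linear(dim, dim)  # keys from depth tokens
        self.v = nn.Linear(dim, dim)  # values from depth tokens

    def forward(self, rgb: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
        # rgb, depth: (B, N, C) flattened pixel tokens.
        q = F.elu(self.q(rgb)) + 1.0    # positive feature map
        k = F.elu(self.k(depth)) + 1.0
        v = self.v(depth)
        kv = torch.einsum("bnc,bnd->bcd", k, v)                     # (B, C, C)
        z = 1.0 / (torch.einsum("bnc,bc->bn", q, k.sum(dim=1)) + 1e-6)
        return torch.einsum("bnc,bcd,bn->bnd", q, kv, z)            # (B, N, C)
```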
Finally, an MLP Decoder is utilized to effectively fuse multi-scale features and meet real-time requirements.
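How the MLP Decoder fuses the pyramid is not specified in the abstract; the sketch below assumes a SegFormer-style design, where each stage is projected to a shared width, upsampled to the finest resolution, concatenated, and fused into class logits. The 1x1 convolutions stand in for per-pixel linear layers; all names and dimensions are illustrative.

```python
# Hypothetical SegFormer-style MLP decoder: project, upsample, concat, fuse.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MLPDecoder(nn.Module):
    def __init__(self, in_dims=(64, 128, 320, 512), dim=256, num_classes=19):
        super().__init__()
        # One 1x1 projection per pyramid stage (equivalent to a per-pixel MLP).
        self.proj = nn.ModuleList([nn.Conv2d(d, dim, 1) for d in in_dims])
        self.fuse = nn.Conv2d(dim * len(in_dims), num_classes, 1)

    def forward(self, feats):
        # feats: list of (B, C_i, H_i, W_i), finest resolution first.
        size = feats[0].shape[-2:]
        up = [F.interpolate(p(f), size=size, mode="bilinear", align_corners=False)
              for p, f in zip(self.proj, feats)]
        return self.fuse(torch.cat(up, dim=1))  # (B, num_classes, H, W)
```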
Comprehensive experiments demonstrate that the proposed DiPFormer significantly mitigates attention misalignment in both road detection (+7.5%) and semantic segmentation (+4.9% / +1.5%) tasks. DiPFormer achieves state-of-the-art performance on the KITTI (97.57% F-score on KITTI road and 68.74% mIoU on KITTI-360) and Cityscapes (83.4% mIoU) datasets.