Depth Matters: Exploring Deep Interactions of RGB-D for Semantic Segmentation in Traffic Scenes

Siyu Chen, Ting Han, Changshe Zhang, Weiquan Liu, Jinhe Su, Zongyue Wang, Guorong Cai
Journal: arXiv - CS - Computer Vision and Pattern Recognition
Published: 2024-09-12
DOI: arxiv-2409.07995
Citations: 0

Abstract

RGB-D has gradually become a crucial data source for understanding complex scenes in assisted driving. However, existing studies have paid insufficient attention to the intrinsic spatial properties of depth maps. This oversight significantly impacts the attention representation, leading to prediction errors caused by attention shift issues. To this end, we propose a novel learnable Depth interaction Pyramid Transformer (DiPFormer) to explore the effectiveness of depth. Firstly, we introduce Depth Spatial-Aware Optimization (Depth SAO) as offset to represent real-world spatial relationships. Secondly, the similarity in the feature space of RGB-D is learned by Depth Linear Cross-Attention (Depth LCA) to clarify spatial differences at the pixel level. Finally, an MLP Decoder is utilized to effectively fuse multi-scale features for meeting real-time requirements. Comprehensive experiments demonstrate that the proposed DiPFormer significantly addresses the issue of attention misalignment in both road detection (+7.5%) and semantic segmentation (+4.9% / +1.5%) tasks. DiPFormer achieves state-of-the-art performance on the KITTI (97.57% F-score on KITTI road and 68.74% mIoU on KITTI-360) and Cityscapes (83.4% mIoU) datasets.
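The abstract does not give implementation details for Depth LCA; the sketch below is only an illustration of the general idea of linear cross-attention between modalities, where queries come from RGB features and keys/values from depth features, using the common kernelized (elu + 1) linear-attention formulation. Function and variable names are hypothetical, not taken from the paper.

```python
import numpy as np

def linear_cross_attention(rgb_feats, depth_feats):
    """Kernelized linear cross-attention sketch: queries from RGB,
    keys/values from depth. Both inputs have shape (n_pixels, d)."""
    # Positive feature map phi(x) = elu(x) + 1, common in linear attention.
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))
    q = phi(rgb_feats)            # (n, d) queries from the RGB stream
    k = phi(depth_feats)          # (n, d) keys from the depth stream
    v = depth_feats               # (n, d) values from the depth stream
    kv = k.T @ v                  # (d, d): aggregate key-value products once,
                                  # giving O(n*d^2) cost instead of O(n^2*d)
    z = q @ k.sum(axis=0)         # (n,): per-query normalizer (always > 0)
    return (q @ kv) / z[:, None]  # (n, d) depth-attended RGB features

rng = np.random.default_rng(0)
rgb = rng.standard_normal((16, 8))
depth = rng.standard_normal((16, 8))
out = linear_cross_attention(rgb, depth)
print(out.shape)  # (16, 8)
```

The point of the linear (kernelized) form is that attention cost grows linearly with the number of pixels, which is consistent with the abstract's stated real-time goal; how DiPFormer actually parameterizes the projections is not specified in this abstract.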