DAGM-Mono: Deformable Attention-Guided Modeling for Monocular 3D Reconstruction

Youshaa Murhij, Dmitry Yudin
{"title":"DAGM-Mono:用于单目三维重建的可变形注意力引导建模","authors":"Youshaa Murhij,&nbsp;Dmitry Yudin","doi":"10.3103/S1060992X2470005X","DOIUrl":null,"url":null,"abstract":"<p>Accurate 3D pose estimation and shape reconstruction from monocular images is a challenging task in the field of autonomous driving. Our work introduces a novel approach to solve this task for vehicles called Deformable Attention-Guided Modeling for Monocular 3D Reconstruction (DAGM-Mono). Our proposed solution addresses the challenge of detailed shape reconstruction by leveraging deformable attention mechanisms. Specifically, given 2D primitives, DAGM-Mono reconstructs vehicles shapes using deformable attention-guided modeling, considering the relevance between detected objects and vehicle shape priors. Our method introduces two additional loss functions: Chamfer Distance (CD) and Hierarchical Chamfer Distance to enhance the process of shape reconstruction by additionally capturing fine-grained shape details at different scales. Our bi-contextual deformable attention framework estimates 3D object pose, capturing both inter-object relations and scene context. Experiments on the ApolloCar3D dataset demonstrate that DAGM-Mono achieves state-of-the-art performance and significantly enhances the performance of mature monocular 3D object detectors. Code and data are publicly available at: https://github.com/YoushaaMurhij/DAGM-Mono.</p>","PeriodicalId":721,"journal":{"name":"Optical Memory and Neural Networks","volume":"33 2","pages":"144 - 156"},"PeriodicalIF":1.0000,"publicationDate":"2024-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"DAGM-Mono: Deformable Attention-Guided Modeling for Monocular 3D Reconstruction\",\"authors\":\"Youshaa Murhij,&nbsp;Dmitry Yudin\",\"doi\":\"10.3103/S1060992X2470005X\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Accurate 3D pose estimation and shape reconstruction from monocular images is a challenging task in the field of autonomous driving. Our work introduces a novel approach to solve this task for vehicles called Deformable Attention-Guided Modeling for Monocular 3D Reconstruction (DAGM-Mono). Our proposed solution addresses the challenge of detailed shape reconstruction by leveraging deformable attention mechanisms. Specifically, given 2D primitives, DAGM-Mono reconstructs vehicles shapes using deformable attention-guided modeling, considering the relevance between detected objects and vehicle shape priors. Our method introduces two additional loss functions: Chamfer Distance (CD) and Hierarchical Chamfer Distance to enhance the process of shape reconstruction by additionally capturing fine-grained shape details at different scales. Our bi-contextual deformable attention framework estimates 3D object pose, capturing both inter-object relations and scene context. Experiments on the ApolloCar3D dataset demonstrate that DAGM-Mono achieves state-of-the-art performance and significantly enhances the performance of mature monocular 3D object detectors. 
Code and data are publicly available at: https://github.com/YoushaaMurhij/DAGM-Mono.</p>\",\"PeriodicalId\":721,\"journal\":{\"name\":\"Optical Memory and Neural Networks\",\"volume\":\"33 2\",\"pages\":\"144 - 156\"},\"PeriodicalIF\":1.0000,\"publicationDate\":\"2024-07-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Optical Memory and Neural Networks\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://link.springer.com/article/10.3103/S1060992X2470005X\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"OPTICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Optical Memory and Neural Networks","FirstCategoryId":"1085","ListUrlMain":"https://link.springer.com/article/10.3103/S1060992X2470005X","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"OPTICS","Score":null,"Total":0}
引用次数: 0

Abstract

Accurate 3D pose estimation and shape reconstruction from monocular images is a challenging task in the field of autonomous driving. Our work introduces a novel approach to this task for vehicles, called Deformable Attention-Guided Modeling for Monocular 3D Reconstruction (DAGM-Mono). Our proposed solution addresses the challenge of detailed shape reconstruction by leveraging deformable attention mechanisms. Specifically, given 2D primitives, DAGM-Mono reconstructs vehicle shapes using deformable attention-guided modeling, considering the relevance between detected objects and vehicle shape priors. Our method introduces two additional loss functions, the Chamfer Distance (CD) and a Hierarchical Chamfer Distance, to enhance shape reconstruction by additionally capturing fine-grained shape details at different scales. Our bi-contextual deformable attention framework estimates 3D object pose, capturing both inter-object relations and scene context. Experiments on the ApolloCar3D dataset demonstrate that DAGM-Mono achieves state-of-the-art performance and significantly enhances the performance of mature monocular 3D object detectors. Code and data are publicly available at: https://github.com/YoushaaMurhij/DAGM-Mono.
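Both reconstruction losses named in the abstract build on the Chamfer Distance between a predicted and a ground-truth point set. The sketch below is only an illustration of the idea, not the authors' implementation: the function names, the random-subsampling scheme, and the way the hierarchical variant averages over scales are assumptions made for the example.

```python
# Illustrative sketch (assumed, not from the DAGM-Mono code): standard
# bidirectional Chamfer Distance plus a simple multi-scale variant that
# re-evaluates CD on progressively coarser subsamplings of the point sets.
import numpy as np

def chamfer_distance(p: np.ndarray, q: np.ndarray) -> float:
    """Sum of mean squared nearest-neighbour distances in both directions.
    p: (N, 3) predicted points, q: (M, 3) ground-truth points."""
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1) ** 2  # (N, M) pairwise squared distances
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def hierarchical_chamfer_distance(p: np.ndarray, q: np.ndarray, levels: int = 3) -> float:
    """Average CD over several resolutions obtained by random subsampling
    (one plausible reading of a multi-scale Chamfer loss)."""
    rng = np.random.default_rng(0)
    total = 0.0
    for level in range(levels):
        keep_p = max(1, p.shape[0] // (2 ** level))  # coarser point set at each level
        keep_q = max(1, q.shape[0] // (2 ** level))
        p_sub = p[rng.choice(p.shape[0], keep_p, replace=False)]
        q_sub = q[rng.choice(q.shape[0], keep_q, replace=False)]
        total += chamfer_distance(p_sub, q_sub)
    return total / levels

# Toy usage: compare a noisy copy of a point cloud against the original.
pts = np.random.default_rng(1).random((256, 3))
noisy = pts + 0.01 * np.random.default_rng(2).normal(size=(256, 3))
print(chamfer_distance(noisy, pts), hierarchical_chamfer_distance(noisy, pts))
```

Averaging the loss over progressively coarser subsamplings is one way to penalize both global shape agreement and fine surface detail, which matches the abstract's stated motivation for adding a hierarchical term on top of the plain Chamfer Distance.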

Journal description: The journal covers a wide range of issues in information optics such as optical memory, mechanisms for optical data recording and processing, photosensitive materials, optical, optoelectronic and holographic nanostructures, and many other related topics. Papers on memory systems using holographic and biological structures and concepts of brain operation are also included. The journal pays particular attention to research in the field of neural network systems that may lead to a new generation of computational technologies by endowing them with intelligence.