Multi-View 3D Reconstruction with Self-Organizing Maps on Event-Based Data

Lea Steffen, Stefan Ulbrich, A. Roennau, R. Dillmann
2019 19th International Conference on Advanced Robotics (ICAR), December 2019, pp. 501–508
DOI: 10.1109/ICAR46387.2019.8981569
Citations: 6

Abstract

Depth perception is crucial for many applications, including robotics, UAVs and autonomous driving. The visual sense, like a camera, maps the 3D world onto a 2D representation, losing the dimension representing depth. One way to recover 3D information from 2D images is to record and join data from multiple viewpoints; in the case of a stereo setup, 4D data is obtained. Existing methods to recover 3D information are computationally expensive. We propose a new, more intuitive method to recover 3D objects from event-based stereo data, using a Self-Organizing Map to solve the correspondence problem and establish a structure similar to a voxel grid. Our approach is also computationally expensive, but it copes with performance issues through massive parallelization. Furthermore, the relatively small voxel grid makes it a memory-friendly solution. The technique is powerful in that it needs no prior knowledge of the extrinsic and intrinsic camera parameters; instead, those parameters, as well as the lens distortion, are learned implicitly. Not only do we not require a parallel camera setup, as many existing methods do, we do not need any information about the camera alignment at all. We evaluated our method in a qualitative analysis and by finding image correspondences.
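The core idea, a Self-Organizing Map whose lattice plays the role of a voxel grid while its node weights live in the 4D space of stereo pixel pairs, can be sketched with a classic Kohonen update. The following is a minimal illustration under stated assumptions, not the authors' implementation: the grid size, learning-rate and neighborhood schedules, and the random training inputs are all hypothetical.

```python
import math
import random

# Sketch of a Kohonen SOM for event-based stereo correspondence (illustrative,
# not the paper's code): a small 3D lattice of nodes, each holding a 4D weight
# vector. One 4D input is a pair of pixel coordinates (x_l, y_l, x_r, y_r)
# from near-simultaneous left/right events; after training, a node's lattice
# index plays the role of a voxel coordinate.

GRID = (6, 6, 6)  # assumed lattice size (the voxel-grid analogue)
random.seed(0)
nodes = {(i, j, k): [random.random() for _ in range(4)]
         for i in range(GRID[0]) for j in range(GRID[1]) for k in range(GRID[2])}

def best_matching_unit(event4d):
    """Lattice index of the node whose 4D weight is closest to the input."""
    return min(nodes, key=lambda p: sum((w - e) ** 2
                                        for w, e in zip(nodes[p], event4d)))

def train_step(event4d, lr, sigma):
    """One Kohonen update: pull the BMU's lattice neighborhood toward the input."""
    bmu = best_matching_unit(event4d)
    for pos, w in nodes.items():
        d2 = sum((a - b) ** 2 for a, b in zip(pos, bmu))
        h = math.exp(-d2 / (2 * sigma ** 2))   # Gaussian neighborhood weight
        for n in range(4):
            w[n] += lr * h * (event4d[n] - w[n])

# Toy training loop on random stereo event pairs in [0, 1)^4, with decaying
# learning rate and shrinking neighborhood (standard SOM schedules).
for t in range(500):
    ev = [random.random() for _ in range(4)]
    frac = 1 - t / 500
    train_step(ev, lr=0.5 * frac, sigma=2.0 * frac + 0.3)

# Query: map a new stereo event pair to its voxel-like lattice index.
voxel = best_matching_unit([0.5, 0.5, 0.5, 0.5])
```

Because the map is trained directly on observed pixel-pair statistics, nothing about the camera geometry enters the update rule, which is consistent with the abstract's claim that extrinsics, intrinsics, and lens distortion are absorbed implicitly into the learned weights.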