Bootstrapped real-time ego motion estimation and scene modeling

Xiang Zhang, Yakup Genç
{"title":"bootstrap实时自我运动估计和场景建模","authors":"Xiang Zhang, Yakup Genç","doi":"10.1109/3DIM.2005.25","DOIUrl":null,"url":null,"abstract":"Estimating the motion of a moving camera in an unknown environment is essential for a number of applications ranging from as-built reconstruction to augmented reality. It is a challenging problem especially when real-time performance is required. Our approach is to estimate the camera motion while reconstructing the shape and appearance of the most salient visual features in the scene. In our 3D reconstruction process, correspondences are obtained by tracking the visual features from frame to frame with optical flow tracking. Optical-flow-based tracking methods have limitations in tracking the salient features. Often larger translational motions and even moderate rotational motions can result in drifts. We propose to augment flow-based tracking by building a landmark representation around reliably reconstructed features. A planar patch around the reconstructed feature point provides matching information that prevents drifts in flow-based feature tracking and allows establishment of correspondences across the frames with large baselines. Selective and periodic such correspondence mappings drastically improve scene and motion reconstruction while adhering to the real-time requirements. The method is experimentally tested to be both accurate and computational efficient.","PeriodicalId":170883,"journal":{"name":"Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM'05)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2005-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Bootstrapped real-time ego motion estimation and scene modeling\",\"authors\":\"Xiang Zhang, Yakup Genç\",\"doi\":\"10.1109/3DIM.2005.25\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Estimating the motion of a moving camera in an unknown environment is essential for a number of applications ranging from as-built reconstruction to augmented reality. It is a challenging problem especially when real-time performance is required. Our approach is to estimate the camera motion while reconstructing the shape and appearance of the most salient visual features in the scene. In our 3D reconstruction process, correspondences are obtained by tracking the visual features from frame to frame with optical flow tracking. Optical-flow-based tracking methods have limitations in tracking the salient features. Often larger translational motions and even moderate rotational motions can result in drifts. We propose to augment flow-based tracking by building a landmark representation around reliably reconstructed features. A planar patch around the reconstructed feature point provides matching information that prevents drifts in flow-based feature tracking and allows establishment of correspondences across the frames with large baselines. Selective and periodic such correspondence mappings drastically improve scene and motion reconstruction while adhering to the real-time requirements. 
The method is experimentally tested to be both accurate and computational efficient.\",\"PeriodicalId\":170883,\"journal\":{\"name\":\"Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM'05)\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2005-06-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM'05)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/3DIM.2005.25\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM'05)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/3DIM.2005.25","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2

Abstract

Estimating the motion of a moving camera in an unknown environment is essential for a number of applications ranging from as-built reconstruction to augmented reality. This is a challenging problem, especially when real-time performance is required. Our approach is to estimate the camera motion while reconstructing the shape and appearance of the most salient visual features in the scene. In our 3D reconstruction process, correspondences are obtained by tracking the visual features from frame to frame with optical flow tracking. Optical-flow-based tracking methods have limitations in tracking salient features: larger translational motions, and even moderate rotational motions, often result in drift. We propose to augment flow-based tracking by building a landmark representation around reliably reconstructed features. A planar patch around the reconstructed feature point provides matching information that prevents drift in flow-based feature tracking and allows correspondences to be established across frames with large baselines. Applying such correspondence mappings selectively and periodically drastically improves scene and motion reconstruction while adhering to the real-time requirements. Experiments show the method to be both accurate and computationally efficient.
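
The drift-correction idea described in the abstract reads naturally as an augmentation of standard pyramidal optical-flow tracking. The Python/OpenCV sketch below is not the authors' implementation; it only illustrates the pattern: track salient features frame to frame with Lucas-Kanade optical flow, store a small image patch around each feature as its landmark, and periodically re-match the stored patch near the flow-predicted position to cancel accumulated drift. The patch size, re-matching interval, acceptance threshold, and the use of plain normalized cross-correlation (rather than a pose-warped planar patch) are all assumptions made for this sketch.

import cv2
import numpy as np

PATCH = 15          # landmark patch half-size in pixels (assumed value)
REMATCH_EVERY = 10  # re-match against landmarks every N frames (assumed value)

def init_landmarks(gray, max_corners=200):
    """Detect salient corners and store the patch around each as its landmark."""
    pts = cv2.goodFeaturesToTrack(gray, maxCorners=max_corners,
                                  qualityLevel=0.01, minDistance=10)
    landmarks = []
    if pts is None:
        return landmarks
    for p in pts.reshape(-1, 2):
        x, y = int(p[0]), int(p[1])
        patch = gray[y - PATCH:y + PATCH + 1, x - PATCH:x + PATCH + 1]
        if patch.shape == (2 * PATCH + 1, 2 * PATCH + 1):
            landmarks.append({"pt": p.astype(np.float32), "patch": patch.copy()})
    return landmarks

def track_step(prev_gray, gray, landmarks, frame_idx):
    """Track landmarks into the new frame; periodically re-match stored patches."""
    prev_pts = np.array([lm["pt"] for lm in landmarks], np.float32).reshape(-1, 1, 2)
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, gray, prev_pts, None, winSize=(21, 21), maxLevel=3)
    for lm, p, ok in zip(landmarks, next_pts.reshape(-1, 2), status.ravel()):
        if not ok:
            continue
        lm["pt"] = p  # flow-predicted position in the new frame
        if frame_idx % REMATCH_EVERY == 0:
            # Drift correction: search for the stored landmark patch in a window
            # around the flow-predicted position using normalized cross-correlation.
            x, y = int(p[0]), int(p[1])
            r = 2 * PATCH
            x0, y0 = max(0, x - r), max(0, y - r)
            roi = gray[y0:y + r + 1, x0:x + r + 1]
            if roi.shape[0] > 2 * PATCH and roi.shape[1] > 2 * PATCH:
                scores = cv2.matchTemplate(roi, lm["patch"], cv2.TM_CCOEFF_NORMED)
                _, best, _, loc = cv2.minMaxLoc(scores)
                if best > 0.8:  # accept only confident matches (assumed threshold)
                    lm["pt"] = np.array([x0 + loc[0] + PATCH,
                                         y0 + loc[1] + PATCH], np.float32)
    return landmarks

Re-matching only every few frames mirrors the "selective and periodic" correspondence mappings of the abstract: per-frame cost stays at the level of optical-flow tracking, while the stored landmark patches keep the tracks anchored to the appearance each feature had when it was first reliably reconstructed.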