Tracking-by-synthesis using point features and pyramidal blurring

Gilles Simon
{"title":"Tracking-by-synthesis using point features and pyramidal blurring","authors":"Gilles Simon","doi":"10.1109/ismar.2011.6092373","DOIUrl":null,"url":null,"abstract":"Tracking-by-synthesis is a promising method for markerless vision-based camera tracking, particularly suitable for Augmented Reality applications. In particular, it is drift-free, viewpoint invariant and easy-to-combine with physical sensors such as GPS and inertial sensors. While edge features have been used succesfully within the tracking-by-synthesis framework, point features have, to our knowledge, still never been used. We believe that this is due to the fact that real-time corner detectors are generally weakly repeatable between a camera image and a rendered texture. In this paper, we compare the repeatability of commonly used FAST, Harris and SURF interest point detectors across view synthesis. We show that adding depth blur to the rendered texture can drastically improve the repeatability of FAST and Harris corner detectors (up to 100% in our experiments), which can be very helpful, e.g., to make tracking-by-synthesis running on mobile phones. We propose a method for simulating depth blur on the rendered images using a pre-calibrated depth response curve. In order to fulfil the performance requirements, a pyramidal approach is used based on the well-known MIP mapping technique. We also propose an original method for calibrating the depth response curve, which is suitable for any kind of focus lenses and comes for free in terms of programming effort, once the tracking-by-synthesis algorithm has been implemented.","PeriodicalId":298757,"journal":{"name":"2011 10th IEEE International Symposium on Mixed and Augmented Reality","volume":"16 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2011-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"26","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2011 10th IEEE International Symposium on Mixed and Augmented Reality","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ismar.2011.6092373","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 26

Abstract

Tracking-by-synthesis is a promising method for markerless vision-based camera tracking, particularly suitable for Augmented Reality applications. In particular, it is drift-free, viewpoint invariant and easy to combine with physical sensors such as GPS and inertial sensors. While edge features have been used successfully within the tracking-by-synthesis framework, point features have, to our knowledge, never been used. We believe this is because real-time corner detectors are generally only weakly repeatable between a camera image and a rendered texture. In this paper, we compare the repeatability of the commonly used FAST, Harris and SURF interest point detectors across view synthesis. We show that adding depth blur to the rendered texture can drastically improve the repeatability of the FAST and Harris corner detectors (up to 100% in our experiments), which can be very helpful, e.g., for making tracking-by-synthesis run on mobile phones. We propose a method for simulating depth blur on the rendered images using a pre-calibrated depth response curve. To meet the performance requirements, a pyramidal approach based on the well-known MIP mapping technique is used. We also propose an original method for calibrating the depth response curve, which is suitable for any kind of focus lens and comes for free in terms of programming effort once the tracking-by-synthesis algorithm has been implemented.
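
The repeatability comparison mentioned in the abstract can be illustrated with a short sketch. The snippet below is a minimal, hypothetical example (not the paper's code) of measuring how many FAST corners detected in a rendered view are re-detected in a camera image within a small pixel tolerance, assuming both images are already registered in the same pixel frame; the file names, FAST threshold and tolerance are placeholders.

```python
import cv2
import numpy as np

def detect_fast(gray, threshold=20):
    """Detect FAST corners and return their (x, y) pixel coordinates."""
    fast = cv2.FastFeatureDetector_create(threshold=threshold)
    pts = [kp.pt for kp in fast.detect(gray, None)]
    return np.array(pts, dtype=np.float32).reshape(-1, 2)

def repeatability(pts_camera, pts_render, tol=2.0):
    """Fraction of rendered-view corners re-detected in the camera image
    within `tol` pixels."""
    if len(pts_render) == 0 or len(pts_camera) == 0:
        return 0.0
    hits = 0
    for p in pts_render:
        if np.linalg.norm(pts_camera - p, axis=1).min() <= tol:
            hits += 1
    return hits / len(pts_render)

# Hypothetical inputs: a live camera frame and the textured model rendered
# from the tracked camera pose, in the same pixel frame.
camera = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)
render = cv2.imread("rendered_view.png", cv2.IMREAD_GRAYSCALE)
score = repeatability(detect_fast(camera), detect_fast(render))
print(f"FAST repeatability: {score:.2%}")
```

The same scoring function could be reused with a Harris or SURF detector to reproduce the kind of comparison the abstract describes.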
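The pyramidal blurring idea can likewise be sketched. The following is a minimal CPU approximation, assuming a pre-calibrated depth response curve that maps per-pixel depth to a fractional MIP level; the curve, the depth map and the pyramid depth are hypothetical placeholders, and the paper's GPU MIP-mapping implementation is only approximated here by blending resampled Gaussian pyramid levels.

```python
import cv2
import numpy as np

def build_pyramid(img, levels=4):
    """Gaussian (MIP-like) pyramid with every level resampled back to full
    resolution so that levels can be blended per pixel."""
    h, w = img.shape[:2]
    pyr = [img.astype(np.float32)]
    cur = img
    for _ in range(levels):
        cur = cv2.pyrDown(cur)
        pyr.append(cv2.resize(cur, (w, h),
                              interpolation=cv2.INTER_LINEAR).astype(np.float32))
    return pyr

def depth_blur(render, depth, depth_to_level, levels=4):
    """Blend the two pyramid levels bracketing the fractional MIP level given
    by the calibrated depth response curve `depth_to_level(depth)`."""
    pyr = build_pyramid(render, levels)
    level = np.clip(depth_to_level(depth), 0.0, float(levels))
    lo = np.floor(level).astype(int)
    hi = np.minimum(lo + 1, levels)
    frac = level - lo
    out = np.zeros_like(pyr[0])
    for l in range(levels + 1):
        out += np.where(lo == l, (1.0 - frac) * pyr[l], 0.0)
        out += np.where(hi == l, frac * pyr[l], 0.0)
    return np.clip(out, 0, 255).astype(np.uint8)

# Hypothetical calibrated curve: blur (in MIP levels) grows with the distance
# to an assumed focus plane at 2 m; the slope and focus depth would come from
# the depth response calibration.
depth_to_level = lambda z: 1.5 * np.abs(z - 2.0)
render_gray = cv2.imread("rendered_view.png", cv2.IMREAD_GRAYSCALE)
depth_map = np.full(render_gray.shape, 3.0, dtype=np.float32)  # placeholder depth (m)
blurred = depth_blur(render_gray, depth_map, depth_to_level)
```

Blending two adjacent pyramid levels per pixel mimics trilinear MIP-map sampling, which is why the approach can meet real-time constraints on GPU hardware.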