An anti-disturbing real time pose estimation method and system

Jian Zhou, Xiao-hu Zhang
DOI: 10.1117/12.900564
Journal: Photoelectronic Detection and Imaging
Published: 2011-06-09
Citations: 0

Abstract

Pose estimation relating two-dimensional (2D) images to a three-dimensional (3D) rigid object requires known features to track. In practice, many algorithms perform this task with high accuracy, but all of them suffer when features are lost. This paper investigates pose estimation when some, or even all, of the known features become invisible. First, the known features are tracked to compute the pose in the current and the next image. Second, unknown but well-trackable features are automatically detected in both images. Third, the unknown features that lie on the rigid object and can be matched between the two images are retained. Owing to the motion of the rigid object, the 3D positions of these unknown features can be solved from the object's poses at the two moments and the features' 2D positions in the two images, except in two cases: first, when the camera and object have no relative motion and the camera parameters (focal length, principal point, etc.) do not change between the two moments; second, when the two images share no common scene or contain no matched features. Finally, because the previously unknown features are now known, pose estimation can continue in subsequent images despite the initial loss of known features, by repeating the process above. The robustness of pose estimation with different feature detectors, namely Kanade-Lucas-Tomasi (KLT) features, the Scale-Invariant Feature Transform (SIFT), and Speeded-Up Robust Features (SURF), is compared, and the impact of different relative motions between the camera and the rigid object is discussed. Graphics Processing Unit (GPU) parallel computing is also used to extract and match hundreds of features for real-time pose estimation, which is difficult to achieve on a Central Processing Unit (CPU).

Compared with other pose estimation methods, the new method can estimate the pose between camera and object even when some or all known features are lost, and it responds quickly thanks to GPU parallel computing. The method can be widely used in vision-guided techniques to strengthen their intelligence and generality, and can play an important role in autonomous navigation, positioning, and robotics in unknown environments. Simulation and experimental results demonstrate that the proposed method suppresses noise effectively, extracts features robustly, and meets real-time requirements. Theoretical analysis and experiments show that the method is reasonable and efficient.
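The core step of recovering the 3D positions of newly detected features from the object's poses at two moments and the features' 2D positions in the two images is a standard two-view triangulation. The paper does not give its exact formulation, so the following is only a minimal linear (DLT) sketch with hypothetical camera parameters; it also reflects the degenerate case noted above: if the two projection matrices are identical (no relative motion, unchanged intrinsics), the linear system becomes rank-deficient and the point cannot be recovered.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one feature seen in two views.

    P1, P2: 3x4 projection matrices K @ [R | t] at the two moments.
    x1, x2: matched 2D pixel coordinates (u, v) in the two images.
    Returns the estimated 3D point.
    """
    # Each view contributes two rows: u * P[2] - P[0] and v * P[2] - P[1].
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector for the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize

# --- illustrative setup; all numbers below are hypothetical ---
K = np.array([[800.0, 0.0, 320.0],   # focal length and principal point
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])

def projection(R, t):
    return K @ np.hstack([R, t.reshape(3, 1)])

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Poses at the two moments: identity, then a small rotation plus translation.
R1, t1 = np.eye(3), np.array([0.0, 0.0, 5.0])
a = 0.1
R2 = np.array([[np.cos(a), 0.0, np.sin(a)],
               [0.0,       1.0, 0.0],
               [-np.sin(a), 0.0, np.cos(a)]])
t2 = np.array([0.2, 0.0, 5.0])

X_true = np.array([0.3, -0.2, 1.0])  # an "unknown" feature on the rigid object
x1 = project(projection(R1, t1), X_true)
x2 = project(projection(R2, t2), X_true)

X_est = triangulate(projection(R1, t1), projection(R2, t2), x1, x2)
print(np.allclose(X_est, X_true, atol=1e-6))  # True (noise-free case)
```

Once such features have known 3D coordinates, pose estimation in subsequent frames can continue from their 2D tracks alone (e.g. with a standard PnP solver), which is how the method survives the loss of the original known features.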