Disambiguating the recognition of 3D objects

Gutemberg Guerra-Filho
{"title":"Disambiguating the recognition of 3D objects","authors":"Gutemberg Guerra-Filho","doi":"10.1109/CVPR.2009.5206683","DOIUrl":null,"url":null,"abstract":"We propose novel algorithms for the detection, segmentation, recognition, and pose estimation of three-dimensional objects. Our approach initially infers geometric primitives to describe the set of 3D objects. A hierarchical structure is constructed to organize the objects in terms of shared primitives and relations between different primitives in the same object. This structure is shown to disambiguate the object models and to improve recognition rates. The primitives are obtained through our new Invariant Hough Transform. This algorithm uses geometric invariants to compute relations for subsets of points in a specific object. Each relation is stored in a hash table according to the invariant value. The hash table is used to find potential corresponding points between objects. With point matches, pose estimation is achieved by building a probability distribution of transformations. We evaluate our methods with experiments using synthetic and real 3D objects.","PeriodicalId":386532,"journal":{"name":"2009 IEEE Conference on Computer Vision and Pattern Recognition","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2009-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2009 IEEE Conference on Computer Vision and Pattern Recognition","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CVPR.2009.5206683","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 4

Abstract

We propose novel algorithms for the detection, segmentation, recognition, and pose estimation of three-dimensional objects. Our approach initially infers geometric primitives to describe the set of 3D objects. A hierarchical structure is constructed to organize the objects in terms of shared primitives and relations between different primitives in the same object. This structure is shown to disambiguate the object models and to improve recognition rates. The primitives are obtained through our new Invariant Hough Transform. This algorithm uses geometric invariants to compute relations for subsets of points in a specific object. Each relation is stored in a hash table according to the invariant value. The hash table is used to find potential corresponding points between objects. With point matches, pose estimation is achieved by building a probability distribution of transformations. We evaluate our methods with experiments using synthetic and real 3D objects.
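The abstract describes an invariant-driven hashing and voting pipeline: relations for point subsets are keyed by a geometric invariant, the hash table retrieves candidate correspondences, and pose is estimated from a distribution over transformations. The sketch below is a rough, hedged illustration of that general idea, not the authors' Invariant Hough Transform: the invariant (sorted pairwise distances of 3D point triplets), the quantization step, the voting parameterization, and all function and parameter names are assumptions made for illustration.

```python
# Hedged sketch of invariant-based hashing for 3D point correspondences and
# pose voting, loosely following the pipeline outlined in the abstract.
# The triplet invariant, quantization, and vote binning are illustrative
# assumptions, not the paper's actual algorithm.

import itertools
from collections import defaultdict

import numpy as np


def triplet_invariant(p, q, r, step=0.05):
    """Rotation/translation-invariant key for a 3-point subset:
    sorted pairwise distances, quantized to a grid of size `step`."""
    d = sorted([np.linalg.norm(p - q), np.linalg.norm(q - r), np.linalg.norm(p - r)])
    return tuple(int(round(x / step)) for x in d)


def build_hash_table(model_points, step=0.05):
    """Index every triplet of model points under its invariant key."""
    table = defaultdict(list)
    for i, j, k in itertools.combinations(range(len(model_points)), 3):
        key = triplet_invariant(model_points[i], model_points[j], model_points[k], step)
        table[key].append((i, j, k))
    return table


def match_triplets(table, scene_points, step=0.05):
    """Look up scene triplets in the model's hash table; each hit is a
    candidate correspondence between a model triplet and a scene triplet."""
    candidates = []
    for i, j, k in itertools.combinations(range(len(scene_points)), 3):
        key = triplet_invariant(scene_points[i], scene_points[j], scene_points[k], step)
        for model_triplet in table.get(key, []):
            candidates.append((model_triplet, (i, j, k)))
    return candidates


def rigid_transform(src, dst):
    """Least-squares rigid transform (Kabsch) mapping src points onto dst."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = dst.mean(0) - R @ src.mean(0)
    return R, t


def vote_for_pose(candidates, model_points, scene_points, angle_bins=36):
    """Accumulate a coarse distribution over candidate poses and return its mode.
    Votes are binned here by rounded translation and rotation angle; the paper's
    exact parameterization of the transformation distribution is not given in
    the abstract."""
    votes = defaultdict(list)
    for (mi, mj, mk), (si, sj, sk) in candidates:
        R, t = rigid_transform(model_points[[mi, mj, mk]], scene_points[[si, sj, sk]])
        angle = np.degrees(np.arccos(np.clip((np.trace(R) - 1) / 2, -1.0, 1.0)))
        bin_key = (tuple(np.round(t, 1)), int(angle // (360 / angle_bins)))
        votes[bin_key].append((R, t))
    R_best, t_best = max(votes.values(), key=len)[0]
    return R_best, t_best
```

A typical use, assuming `model` and `scene` are `(N, 3)` NumPy arrays of object points, would be `table = build_hash_table(model)` followed by `R, t = vote_for_pose(match_triplets(table, scene), model, scene)`; the returned rigid transform corresponds to the most heavily voted bin of the transformation distribution.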