A fast monocular 6D pose estimation method for textureless objects based on perceptual hashing and template matching.

Frontiers in Robotics and AI · IF 3.0 · Q2 (Robotics) · Pub Date: 2025-01-08 · eCollection Date: 2024-01-01 · DOI: 10.3389/frobt.2024.1424036
Jose Moises Araya-Martinez, Vinicius Soares Matthiesen, Simon Bøgh, Jens Lambrecht, Rui Pimentel de Figueiredo
{"title":"A fast monocular 6D pose estimation method for textureless objects based on perceptual hashing and template matching.","authors":"Jose Moises Araya-Martinez, Vinicius Soares Matthiesen, Simon Bøgh, Jens Lambrecht, Rui Pimentel de Figueiredo","doi":"10.3389/frobt.2024.1424036","DOIUrl":null,"url":null,"abstract":"<p><p>Object pose estimation is essential for computer vision applications such as quality inspection, robotic bin picking, and warehouse logistics. However, this task often requires expensive equipment such as 3D cameras or Lidar sensors, as well as significant computational resources. Many state-of-the-art methods for 6D pose estimation depend on deep neural networks, which are computationally demanding and require GPUs for real-time performance. Moreover, they usually involve the collection and labeling of large training datasets, which is costly and time-consuming. In this study, we propose a template-based matching algorithm that utilizes a novel perceptual hashing method for binary images, enabling fast and robust pose estimation. This approach allows the automatic preselection of a subset of templates, significantly reducing inference time while maintaining similar accuracy. Our solution runs efficiently on multiple devices without GPU support, offering reduced runtime and high accuracy on cost-effective hardware. We benchmarked our proposed approach on a body-in-white automotive part and a widely used publicly available dataset. Our set of experiments on a synthetically generated dataset reveals a trade-off between accuracy and computation time superior to a previous work on the same automotive-production use case. Additionally, our algorithm efficiently utilizes all CPU cores and includes adjustable parameters for balancing computation time and accuracy, making it suitable for a wide range of applications where hardware cost and power efficiency are critical. For instance, with a rotation step of 10° in the template database, we achieve an average rotation error of <math><mrow><mn>10</mn> <mo>°</mo></mrow> </math> , matching the template quantization level, and an average translation error of 14% of the object's size, with an average processing time of <math><mrow><mn>0.3</mn> <mi>s</mi></mrow> </math> per image on a small form-factor NVIDIA AGX Orin device. We also evaluate robustness under partial occlusions (up to 10% occlusion) and noisy inputs (signal-to-noise ratios [SNRs] up to 10 dB), with only minor losses in accuracy. Additionally, we compare our method to state-of-the-art deep learning models on a public dataset. 
Although our algorithm does not outperform them in absolute accuracy, it provides a more favorable trade-off between accuracy and processing time, which is especially relevant to applications using resource-constrained devices.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"11 ","pages":"1424036"},"PeriodicalIF":3.0000,"publicationDate":"2025-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11750840/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Frontiers in Robotics and AI","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3389/frobt.2024.1424036","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/1/1 0:00:00","PubModel":"eCollection","JCR":"Q2","JCRName":"ROBOTICS","Score":null,"Total":0}
引用次数: 0

Abstract

Object pose estimation is essential for computer vision applications such as quality inspection, robotic bin picking, and warehouse logistics. However, this task often requires expensive equipment such as 3D cameras or Lidar sensors, as well as significant computational resources. Many state-of-the-art methods for 6D pose estimation depend on deep neural networks, which are computationally demanding and require GPUs for real-time performance. Moreover, they usually involve the collection and labeling of large training datasets, which is costly and time-consuming. In this study, we propose a template-based matching algorithm that utilizes a novel perceptual hashing method for binary images, enabling fast and robust pose estimation. This approach allows the automatic preselection of a subset of templates, significantly reducing inference time while maintaining similar accuracy. Our solution runs efficiently on multiple devices without GPU support, offering reduced runtime and high accuracy on cost-effective hardware. We benchmarked our proposed approach on a body-in-white automotive part and a widely used publicly available dataset. Our experiments on a synthetically generated dataset reveal a trade-off between accuracy and computation time that is superior to previous work on the same automotive-production use case. Additionally, our algorithm efficiently utilizes all CPU cores and includes adjustable parameters for balancing computation time and accuracy, making it suitable for a wide range of applications where hardware cost and power efficiency are critical. For instance, with a rotation step of 10° in the template database, we achieve an average rotation error of 10°, matching the template quantization level, and an average translation error of 14% of the object's size, with an average processing time of 0.3 s per image on a small form-factor NVIDIA AGX Orin device. We also evaluate robustness under partial occlusions (up to 10% occlusion) and noisy inputs (signal-to-noise ratios [SNRs] up to 10 dB), with only minor losses in accuracy. Additionally, we compare our method to state-of-the-art deep learning models on a public dataset. Although our algorithm does not outperform them in absolute accuracy, it provides a more favorable trade-off between accuracy and processing time, which is especially relevant to applications using resource-constrained devices.
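The preselection idea in the abstract lends itself to a short sketch. Below is a minimal, hypothetical Python illustration of the general approach as described: binarized object views are reduced to fixed-length bit strings, and a query's hash is compared against the whole template database by Hamming distance so that only the nearest candidates proceed to the expensive template-matching stage. The hash construction (mean-threshold binarization on a 16x16 grid), the candidate count k, and all function names are our own assumptions for illustration, not the authors' exact method.

import numpy as np
import cv2

def binary_phash(mask: np.ndarray, hash_size: int = 16) -> np.ndarray:
    """Hash a binary object mask into a fixed-length bit vector.

    The mask is shrunk to hash_size x hash_size with area interpolation,
    which averages the pixels falling into each cell; a cell becomes a
    1-bit if it is mostly foreground. Averaging makes the hash tolerant
    to small shifts and noise in the input mask.
    """
    small = cv2.resize(mask.astype(np.float32), (hash_size, hash_size),
                       interpolation=cv2.INTER_AREA)
    return (small > 0.5).flatten().astype(np.uint8)

def preselect_templates(query_hash: np.ndarray,
                        template_hashes: np.ndarray,
                        k: int = 50) -> np.ndarray:
    """Return indices of the k templates nearest in Hamming distance.

    template_hashes has shape (num_templates, hash_size**2); comparing
    it against the query is one vectorized XOR-and-count, so the whole
    database is scanned cheaply before any dense matching is attempted.
    """
    distances = np.count_nonzero(template_hashes != query_hash, axis=1)
    return np.argsort(distances)[:k]

# Illustrative usage: hash the rendered template masks once, offline
# (e.g., one view per 10° rotation step, as in the paper's experiments),
# then hash each query mask online and preselect candidates per frame:
#   template_hashes = np.stack([binary_phash(m) for m in template_masks])
#   candidates = preselect_templates(binary_phash(query_mask), template_hashes)

In this sketch the preselection cost is independent of image resolution and grows only linearly with the number of templates, which is consistent with the abstract's claim that hashing reduces inference time while the full matcher preserves accuracy on the surviving candidates.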

Source journal
CiteScore: 6.50
Self-citation rate: 5.90%
Publications: 355
Review time: 14 weeks
Journal introduction: Frontiers in Robotics and AI publishes rigorously peer-reviewed research covering all theory and applications of robotics, technology, and artificial intelligence, from biomedical to space robotics.
Latest articles from this journal:
- Robots, ledgers, and RevPAR: a blockchain-enabled AI-robotics conceptual model for sustainable hotel revenue and asset management.
- RAISE-FER: a massive cross-dataset augmented facial expression dataset.
- Transforming customer experience in social robotics through explainable and interpretable artificial intelligence over a decade.
- When AI takes the wheel: AI-defined vehicles principles and pitfalls.
- Effects of praise from a social robot on task persistence in 18- to 24-month-old children.