Adversarial imitation learning-based network for category-level 6D object pose estimation

Machine Vision and Applications | Impact Factor 2.4 | CAS Tier 4 (Computer Science) | JCR Q3 (Computer Science, Artificial Intelligence) | Pub Date: 2024-08-12 | DOI: 10.1007/s00138-024-01592-6
Shantong Sun, Xu Bao, Aryan Kaushik
{"title":"Adversarial imitation learning-based network for category-level 6D object pose estimation","authors":"Shantong Sun, Xu Bao, Aryan Kaushik","doi":"10.1007/s00138-024-01592-6","DOIUrl":null,"url":null,"abstract":"<p>Category-level 6D object pose estimation is a very fundamental and key research in computer vision. In order to get rid of the dependence on the object 3D models, analysis-by-synthesis object pose estimation methods have recently been widely studied. While these methods have certain improvements in generalization, the accuracy of category-level object pose estimation still needs to be improved. In this paper, we propose a category-level 6D object pose estimation network based on adversarial imitation learning, named AIL-Net. AIL-Net adopts the state-action distribution matching criterion and is able to perform expert actions that have not appeared in the dataset. This prevents the object pose estimation from falling into a bad state. We further design a framework for estimating object pose through generative adversarial imitation learning. This method is able to distinguish between expert policy and imitation policy in AIL-Net. Experimental results show that our approach achieves competitive category-level object pose estimation performance on REAL275 dataset and Cars dataset.</p>","PeriodicalId":51116,"journal":{"name":"Machine Vision and Applications","volume":"9 1","pages":""},"PeriodicalIF":2.4000,"publicationDate":"2024-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Machine Vision and Applications","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s00138-024-01592-6","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Category-level 6D object pose estimation is a fundamental problem in computer vision. To remove the dependence on object 3D models, analysis-by-synthesis pose estimation methods have recently been widely studied. Although these methods improve generalization, the accuracy of category-level object pose estimation still needs to be improved. In this paper, we propose a category-level 6D object pose estimation network based on adversarial imitation learning, named AIL-Net. AIL-Net adopts a state-action distribution matching criterion and can perform expert actions that do not appear in the dataset, which prevents the pose estimation from falling into a bad state. We further design a framework that estimates object pose through generative adversarial imitation learning and distinguishes the expert policy from the imitation policy in AIL-Net. Experimental results show that our approach achieves competitive category-level object pose estimation performance on the REAL275 and Cars datasets.
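The abstract names the generative adversarial imitation learning (GAIL) criterion without giving details, so the sketch below is only a generic illustration of that criterion, not the authors' AIL-Net: a discriminator is trained to separate expert state-action pairs from pairs produced by the imitation policy, and the policy receives a surrogate reward for fooling it. The class name, network sizes, and the particular reward form are illustrative assumptions.

# Minimal sketch of the generative adversarial imitation learning (GAIL)
# criterion referenced in the abstract; NOT the authors' AIL-Net code.
# Names, network sizes, and the surrogate reward form are assumptions.
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """Scores a (state, action) pair; a high logit means 'looks like expert data'."""
    def __init__(self, state_dim: int, action_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([state, action], dim=-1))

def discriminator_loss(disc, expert_s, expert_a, policy_s, policy_a):
    """Binary cross-entropy: expert pairs are labeled 1, imitation-policy pairs 0."""
    bce = nn.BCEWithLogitsLoss()
    expert_logits = disc(expert_s, expert_a)
    policy_logits = disc(policy_s, policy_a)
    return (bce(expert_logits, torch.ones_like(expert_logits)) +
            bce(policy_logits, torch.zeros_like(policy_logits)))

def imitation_reward(disc, state, action):
    """Surrogate reward for the policy: larger when the discriminator believes
    the pair came from the expert (one common GAIL reward form)."""
    with torch.no_grad():
        d = torch.sigmoid(disc(state, action))
    return -torch.log(1.0 - d + 1e-8)

In a training loop, the discriminator and the policy would be updated alternately: the discriminator minimizes discriminator_loss on mini-batches of expert and policy transitions, while the policy is optimized (e.g., with a standard policy-gradient method) against imitation_reward, which drives the state-action distribution of the policy toward that of the expert.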


Source journal
Machine Vision and Applications (Engineering & Technology; Engineering: Electrical & Electronic)
CiteScore: 6.30
Self-citation rate: 3.00%
Articles per year: 84
Average review time: 8.7 months
Journal description: Machine Vision and Applications publishes high-quality technical contributions in machine vision research and development. Specifically, the editors encourage submittals in all applications and engineering aspects of image-related computing. In particular, original contributions dealing with scientific, commercial, industrial, military, and biomedical applications of machine vision are all within the scope of the journal. Particular emphasis is placed on engineering and technology aspects of image processing and computer vision. The following aspects of machine vision applications are of interest: algorithms, architectures, VLSI implementations, AI techniques and expert systems for machine vision, front-end sensing, multidimensional and multisensor machine vision, real-time techniques, image databases, virtual reality and visualization. Papers must include a significant experimental validation component.
Latest articles from this journal:
A novel key point based ROI segmentation and image captioning using guidance information
Specular Surface Detection with Deep Static Specular Flow and Highlight
Removing cloud shadows from ground-based solar imagery
Underwater image object detection based on multi-scale feature fusion
Object Recognition Consistency in Regression for Active Detection