A Transformer-Based Network for Full Object Pose Estimation with Depth Refinement

Advanced Intelligent Systems (Weinheim an der Bergstrasse, Germany) · IF 6.8 · Q1 (Automation & Control Systems) · Pub Date: 2024-07-28 · DOI: 10.1002/aisy.202400110
Mahmoud Abdulsalam, Kenan Ahiska, Nabil Aouf
Citations: 0

Abstract

In response to the increasing demand for robotic manipulation, accurate vision-based full pose estimation is essential. While convolutional neural network-based approaches have been introduced, the quest for higher performance continues, especially for precise robotic manipulation, including in the Agri-robotics domain. This article proposes an improved transformer-based pipeline for full pose estimation, incorporating a Depth Refinement Module. Operating solely on monocular images, the architecture features an innovative Lighter Depth Estimation Network that uses a Feature Pyramid with an up-sampling method for depth prediction. A Transformer-based Detection Network with additional prediction heads is employed to directly regress object centers and predict the full poses of the target objects. A novel Depth Refinement Module is then utilized alongside the predicted centers, full poses, and depth patches to refine the accuracy of the estimated poses. The performance of this pipeline is extensively compared with other state-of-the-art methods, and the results are analyzed for fruit-picking applications. The results demonstrate that the pipeline improves pose estimation accuracy by up to 90.79% compared with other methods available in the literature.
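As a rough illustration of the idea behind combining a predicted 2D object center with a refined depth patch to recover a 3D translation (the rotation heads are omitted), one can back-project through a pinhole camera model. This is a minimal sketch, not the paper's method: the median-based patch refinement and the camera intrinsics below are assumptions for illustration only.

```python
import numpy as np

def refine_depth(depth_patch):
    # Hypothetical refinement: take the median of the depth patch
    # around the predicted object center to suppress outliers.
    return float(np.median(depth_patch))

def back_project(center_uv, depth, K):
    # Pinhole back-projection: recover the 3D position of the object
    # center from its pixel coordinates and an estimated depth.
    u, v = center_uv
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

# Assumed intrinsics for a 640x480 camera (illustrative only).
K = np.array([[600.0,   0.0, 320.0],
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Depth patch (meters) around the predicted center; 2.00 is an outlier.
patch = np.array([[0.52, 0.50, 0.51],
                  [0.50, 2.00, 0.50],
                  [0.51, 0.50, 0.52]])

z = refine_depth(patch)                    # 0.51
t = back_project((400.0, 300.0), z, K)     # 3D translation of the center
```

The median here simply stands in for whatever refinement the module performs; the point is that a per-object depth patch, rather than a single raw depth value, makes the recovered translation robust to spurious depth predictions.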

Source journal: CiteScore 1.30 · Self-citation rate 0.00% · Review time: 4 weeks