Autonomous robotic bin picking platform generated from human demonstration and YOLOv5

IF 2.4 · CAS Region 3 (Engineering & Technology) · JCR Q3 (Engineering, Manufacturing) · Journal of Manufacturing Science and Engineering-Transactions of the ASME · Pub Date: 2023-08-04 · DOI: 10.1115/1.4063107
Jinho Park, C. Han, M. Jun, Huitaek Yun
{"title":"Autonomous robotic bin picking platform generated from human demonstration and YOLOv5","authors":"Jinho Park, C. Han, M. Jun, Huitaek Yun","doi":"10.1115/1.4063107","DOIUrl":null,"url":null,"abstract":"\n Vision-based robots have been utilized for pick-and-place operations by their ability to find object poses. As they progress into handling a variety of objects with cluttered state, more flexible and lightweight operations have been presented. In this paper, an autonomous robotic bin-picking platform which combines human demonstration with a collaborative robot for the flexibility of the objects and YOLOv5 neural network model for the faster object localization without prior CAD models or dataset in the training. After simple human demonstration of which target object to pick and place, the raw color and depth images were refined, and the one on top of the bin was utilized to create synthetic images and annotations for the YOLOv5 model. To pick up the target object, the point cloud was lifted using the depth data corresponding to the result of the trained YOLOv5 model, and the object pose was estimated through matching them by Iterative Closest Points (ICP) algorithm. After picking up the target object, the robot placed it where the user defined in the previous human demonstration stage. From the result of experiments with four types of objects and four human demonstrations, it took a total of 0.5 seconds to recognize the target object and estimate the object pose. The success rate of object detection was 95.6%, and the pick-and-place motion of all the found objects were successful.","PeriodicalId":16299,"journal":{"name":"Journal of Manufacturing Science and Engineering-transactions of The Asme","volume":null,"pages":null},"PeriodicalIF":2.4000,"publicationDate":"2023-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Manufacturing Science and Engineering-transactions of The Asme","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1115/1.4063107","RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"ENGINEERING, MANUFACTURING","Score":null,"Total":0}
Citations: 0

Abstract

Vision-based robots have been utilized for pick-and-place operations owing to their ability to find object poses. As they progress toward handling a variety of objects in cluttered states, more flexible and lightweight operations have been presented. In this paper, an autonomous robotic bin-picking platform is presented that combines human demonstration with a collaborative robot for flexibility across objects, and a YOLOv5 neural network model for faster object localization without prior CAD models or datasets for training. After a simple human demonstration of which target object to pick and where to place it, the raw color and depth images were refined, and the object on top of the bin was used to create synthetic images and annotations for the YOLOv5 model. To pick up the target object, a point cloud was lifted from the depth data corresponding to the detection of the trained YOLOv5 model, and the object pose was estimated by matching the point clouds with the Iterative Closest Point (ICP) algorithm. After picking up the target object, the robot placed it at the location the user defined in the preceding human demonstration stage. In experiments with four types of objects and four human demonstrations, recognizing the target object and estimating its pose took a total of 0.5 seconds. The object detection success rate was 95.6%, and the pick-and-place motions of all detected objects were successful.
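The abstract outlines a detection-then-registration pipeline: YOLOv5 finds the top object in the color image, the corresponding depth pixels are lifted into a point cloud, and ICP aligns that cloud to estimate the object pose. The sketch below is a minimal illustration of that flow, not the authors' implementation; it assumes an RGB-D camera, the Ultralytics YOLOv5 hub interface, and Open3D for ICP. The intrinsics FX/FY/CX/CY, the weights file bin_picking_yolov5.pt, and the demonstration reference cloud are hypothetical placeholders.

```python
# Minimal sketch of a detect -> lift point cloud -> ICP pose pipeline
# (illustrative only; camera intrinsics and file names are assumptions).
import numpy as np
import torch
import open3d as o3d

# Hypothetical RGB-D camera intrinsics (assumed values).
FX, FY, CX, CY = 615.0, 615.0, 320.0, 240.0

def detect_target(color_img, weights="bin_picking_yolov5.pt"):
    """Run a YOLOv5 model trained on synthetic images; return the best box."""
    model = torch.hub.load("ultralytics/yolov5", "custom", path=weights)
    det = model(color_img).xyxy[0]          # rows: [x1, y1, x2, y2, conf, cls]
    if len(det) == 0:
        return None
    best = det[det[:, 4].argmax()]          # highest-confidence detection
    return best[:4].int().tolist()

def lift_point_cloud(depth_m, box):
    """Back-project depth pixels inside the detected box into 3D points."""
    x1, y1, x2, y2 = box
    us, vs = np.meshgrid(np.arange(x1, x2), np.arange(y1, y2))
    zs = depth_m[y1:y2, x1:x2]
    valid = zs > 0                          # drop missing depth readings
    xs = (us[valid] - CX) * zs[valid] / FX
    ys = (vs[valid] - CY) * zs[valid] / FY
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(
        np.stack([xs, ys, zs[valid]], axis=1))
    return pcd

def estimate_pose(scene_pcd, demo_pcd, max_dist=0.01):
    """Align the demonstration cloud to the scene cloud with point-to-point ICP."""
    result = o3d.pipelines.registration.registration_icp(
        demo_pcd, scene_pcd, max_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation            # 4x4 object pose in the camera frame
```

In practice the returned 4x4 transform would still need to be expressed in the robot's base frame and combined with the place pose recorded during the human demonstration stage.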
Source journal: Journal of Manufacturing Science and Engineering-Transactions of the ASME
CiteScore: 6.80
Self-citation rate: 20.00%
Articles published: 126
Review time: 12 months
Journal description: Areas of interest include, but are not limited to: Additive manufacturing; Advanced materials and processing; Assembly; Biomedical manufacturing; Bulk deformation processes (e.g., extrusion, forging, wire drawing, etc.); CAD/CAM/CAE; Computer-integrated manufacturing; Control and automation; Cyber-physical systems in manufacturing; Data science-enhanced manufacturing; Design for manufacturing; Electrical and electrochemical machining; Grinding and abrasive processes; Injection molding and other polymer fabrication processes; Inspection and quality control; Laser processes; Machine tool dynamics; Machining processes; Materials handling; Metrology; Micro- and nano-machining and processing; Modeling and simulation; Nontraditional manufacturing processes; Plant engineering and maintenance; Powder processing; Precision and ultra-precision machining; Process engineering; Process planning; Production systems optimization; Rapid prototyping and solid freeform fabrication; Robotics and flexible tooling; Sensing, monitoring, and diagnostics; Sheet and tube metal forming; Sustainable manufacturing; Tribology in manufacturing; Welding and joining
Latest articles from this journal
CONTINUOUS STEREOLITHOGRAPHY 3D PRINTING OF MULTI-NETWORK HYDROGELS IN TRIPLY PERIODIC MINIMAL STRUCTURES (TPMS) WITH TUNABLE MECHANICAL STRENGTH FOR ENERGY ABSORPTION
A Review of Prospects and Opportunities in Disassembly with Human-Robot Collaboration
The Effect of Microstructure on the Machinability of Natural Fiber Reinforced Plastic Composites: A Novel Explainable Machine Learning (XML) Approach
A Digital Twin-based environment-adaptive assignment method for human-robot collaboration
Combining Flexible and Sustainable Design Principles for Evaluating Designs: Textile Recycling Application