{"title":"Robot programming by demonstration with a monocular RGB camera","authors":"Kaimeng Wang, Te Tang","doi":"10.1108/ir-04-2022-0093","DOIUrl":null,"url":null,"abstract":"\nPurpose\nThis paper aims to present a new approach for robot programming by demonstration, which generates robot programs by tracking 6 dimensional (6D) pose of the demonstrator’s hand using a single red green blue (RGB) camera without requiring any additional sensors.\n\n\nDesign/methodology/approach\nThe proposed method learns robot grasps and trajectories directly from a single human demonstration by tracking the movements of both human hands and objects. To recover the 6D pose of an object from a single RGB image, a deep learning–based method is used to detect the keypoints of the object first and then solve a perspective-n-point problem. This method is first extended to estimate the 6D pose of the nonrigid hand by separating fingers into multiple rigid bones linked with hand joints. The accurate robot grasp can be generated according to the relative positions between hands and objects in the 2 dimensional space. Robot end-effector trajectories are generated from hand movements and then refined by objects’ start and end positions.\n\n\nFindings\nExperiments are conducted on a FANUC LR Mate 200iD robot to verify the proposed approach. The results show the feasibility of generating robot programs by observing human demonstration once using a single RGB camera.\n\n\nOriginality/value\nThe proposed approach provides an efficient and low-cost robot programming method with a single RGB camera. A new 6D hand pose estimation approach, which is used to generate robot grasps and trajectories, is developed.\n","PeriodicalId":54987,"journal":{"name":"Industrial Robot-The International Journal of Robotics Research and Application","volume":null,"pages":null},"PeriodicalIF":1.9000,"publicationDate":"2022-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Industrial Robot-The International Journal of Robotics Research and Application","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1108/ir-04-2022-0093","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"ENGINEERING, INDUSTRIAL","Score":null,"Total":0}
Citations: 3
Abstract
Purpose
This paper aims to present a new approach for robot programming by demonstration, which generates robot programs by tracking the six-dimensional (6D) pose of the demonstrator's hand using a single red-green-blue (RGB) camera, without requiring any additional sensors.
Design/methodology/approach
The proposed method learns robot grasps and trajectories directly from a single human demonstration by tracking the movements of both the human hand and the objects. To recover the 6D pose of an object from a single RGB image, a deep learning–based method first detects the keypoints of the object and then solves a perspective-n-point (PnP) problem. This method is then extended to estimate the 6D pose of the nonrigid hand by separating the fingers into multiple rigid bones linked by hand joints. An accurate robot grasp can be generated from the relative positions of the hand and the object in two-dimensional (2D) space. Robot end-effector trajectories are generated from the hand movements and then refined using the objects' start and end positions.
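The keypoint-plus-PnP step can be pictured in a few lines of OpenCV. The sketch below is illustrative only, not the paper's implementation: `recover_6d_pose` is a hypothetical helper, and the 2D keypoints are assumed to come from an already-trained deep detector.

```python
import numpy as np
import cv2


def recover_6d_pose(image_points_2d, model_points_3d, camera_matrix, dist_coeffs=None):
    """Recover a 6D pose (rotation + translation) from 2D-3D keypoint pairs.

    image_points_2d: (N, 2) pixel coordinates of the object's keypoints,
        assumed here to come from a deep keypoint detector (as in the paper).
    model_points_3d: (N, 3) the same keypoints expressed in the object's own
        frame, known in advance from the object model.
    """
    if dist_coeffs is None:
        dist_coeffs = np.zeros(5)  # assume an undistorted image

    # Solve the perspective-n-point problem: find the rigid transform that
    # projects the 3D model keypoints onto the detected 2D keypoints.
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(model_points_3d, dtype=np.float64),
        np.asarray(image_points_2d, dtype=np.float64),
        np.asarray(camera_matrix, dtype=np.float64),
        dist_coeffs,
        flags=cv2.SOLVEPNP_ITERATIVE,
    )
    if not ok:
        raise RuntimeError("PnP solver did not converge")

    # Convert the axis-angle rotation to a 3x3 matrix and assemble the
    # 4x4 homogeneous pose of the object in the camera frame.
    rot, _ = cv2.Rodrigues(rvec)
    pose = np.eye(4)
    pose[:3, :3] = rot
    pose[:3, 3] = tvec.ravel()
    return pose
```

For the hand, the same machinery applies per rigid bone segment rather than to the hand as a single rigid body, which is how the paper handles the hand's nonrigidity.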
Findings
Experiments are conducted on a FANUC LR Mate 200iD robot to verify the proposed approach. The results show the feasibility of generating robot programs by observing a human demonstration only once with a single RGB camera.
Originality/value
The proposed approach provides an efficient and low-cost robot programming method that requires only a single RGB camera. A new 6D hand pose estimation approach is developed and used to generate robot grasps and trajectories.
Journal introduction:
Industrial Robot publishes peer-reviewed research articles, technology reviews and specially commissioned case studies. Each issue includes high-quality content covering all aspects of robotic technology, reflecting the most interesting and strategically important research and development activities from around the world.
The journal's policy of not publishing work that has only been tested in simulation means that only the very best and most practical research articles are included. This ensures that the published material has real relevance and value for commercial manufacturing and research organizations. Industrial Robot's coverage includes, but is not restricted to:
Automatic assembly
Flexible manufacturing
Programming optimisation
Simulation and offline programming
Service robots
Autonomous robots
Swarm intelligence
Humanoid robots
Prosthetics and exoskeletons
Machine intelligence
Military robots
Underwater and aerial robots
Cooperative robots
Flexible grippers and tactile sensing
Robot vision
Teleoperation
Mobile robots
Search and rescue robots
Robot welding
Collision avoidance
Robotic machining
Surgical robots
Call for Papers 2020
AI for Autonomous Unmanned Systems
Agricultural Robot
Brain-Computer Interfaces for Human-Robot Interaction
Cooperative Robots
Robots for Environmental Monitoring
Rehabilitation Robots
Wearable Robotics/Exoskeletons