{"title":"Robotic grasping target detection based on domain randomization","authors":"Jiyuan Liu, Junqi Luo, Zhenyu Zhang, Daopeng Liu, Shanjun Zhang, Liucun Zhu","doi":"10.1109/ARACE56528.2022.00038","DOIUrl":null,"url":null,"abstract":"In recent years, deep learning has been a great success in robotic vision grasping, which is largely due to its adaptive learning capability and large-scale training samples. However, the hand-crafted datasets may suffer the dilemma of time-cost and quality. In this paper, a robot grasping target detection algorithm based on synthetic data is proposed. The training samples are generated quickly and accurately by domain randomization technique. Each RGB image of the domain randomized dataset contains complex backgrounds and randomly rotated detection targets, while the illumination of the scene and the occlusion of the targets are randomized to improve the generalization of the model, and finally we put the dataset into YOLOv3 for training. The YCB dataset is used as the training and testing samples. The experiments compare the detecting effects of the networks that are trained by YCB dataset and its synthetic data respectively. The results show that the dataset by domain randomization is consistent with the YCB dataset in terms of recognition accuracy, while the mAP of the dataset by domain randomization is improved by 10% compared to the YCB dataset, which further indicates that the synthetic dataset constructed by domain randomization can effectively improve the network learning effect and further improve the recognized performance of the target in complex scene.","PeriodicalId":437892,"journal":{"name":"2022 Asia Conference on Advanced Robotics, Automation, and Control Engineering (ARACE)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 Asia Conference on Advanced Robotics, Automation, and Control Engineering (ARACE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ARACE56528.2022.00038","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
In recent years, deep learning has achieved great success in robotic vision grasping, largely due to its adaptive learning capability and the availability of large-scale training samples. However, hand-crafted datasets face a trade-off between annotation time and quality. In this paper, a robotic grasping target detection algorithm based on synthetic data is proposed. Training samples are generated quickly and accurately with a domain randomization technique: each RGB image in the domain-randomized dataset contains a complex background and randomly rotated detection targets, and the scene illumination and target occlusion are also randomized to improve the generalization of the model. The resulting dataset is then used to train YOLOv3. The YCB dataset serves as the source of training and testing samples, and the experiments compare the detection performance of networks trained on the YCB dataset and on its domain-randomized synthetic counterpart. The results show that the domain-randomized dataset matches the YCB dataset in recognition accuracy, while its mAP is improved by 10% over the YCB dataset. This indicates that a synthetic dataset constructed by domain randomization can effectively improve network learning and further improve target detection performance in complex scenes.
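The abstract outlines the usual domain-randomization recipe for synthetic detection data: paste object crops onto varied backgrounds with random rotation, illumination, and partial occlusion, and record the bounding box for training. The following is a minimal Python sketch of that general idea, not the authors' actual pipeline; the `randomize_sample` helper, the use of RGBA object crops, and all parameter ranges are illustrative assumptions.

```python
# Minimal domain-randomization sketch (illustrative, not the paper's pipeline).
# Assumes RGBA object crops (e.g. rendered YCB objects) and arbitrary RGB
# background photos; all ranges below are assumed values for illustration.
import random
from PIL import Image, ImageEnhance

def randomize_sample(obj_path, bg_path, out_size=(416, 416)):
    """Composite one object onto one background with randomized rotation,
    illumination, and occlusion; return the image and a YOLO-style bbox."""
    bg = Image.open(bg_path).convert("RGB").resize(out_size)
    obj = Image.open(obj_path).convert("RGBA")

    # Random scale (relative to the output size) and in-plane rotation.
    target = int(out_size[0] * random.uniform(0.2, 0.5))
    w, h = obj.size
    ratio = target / max(w, h)
    obj = obj.resize((max(int(w * ratio), 1), max(int(h * ratio), 1)))
    obj = obj.rotate(random.uniform(0, 360), expand=True)

    # Random placement; the alpha channel serves as the paste mask.
    x = random.randint(0, out_size[0] - obj.width)
    y = random.randint(0, out_size[1] - obj.height)
    bg.paste(obj, (x, y), obj)

    # Randomized scene illumination via a global brightness jitter.
    bg = ImageEnhance.Brightness(bg).enhance(random.uniform(0.5, 1.5))

    # Random rectangular occluder covering part of the target.
    occ_w = random.randint(max(obj.width // 8, 1), max(obj.width // 3, 1))
    occ_h = random.randint(max(obj.height // 8, 1), max(obj.height // 3, 1))
    occ = Image.new("RGB", (occ_w, occ_h),
                    tuple(random.randint(0, 255) for _ in range(3)))
    bg.paste(occ, (x + random.randint(0, obj.width - occ_w),
                   y + random.randint(0, obj.height - occ_h)))

    # Loose axis-aligned box around the pasted crop, normalized YOLO-style
    # (center x, center y, width, height).
    bbox = ((x + obj.width / 2) / out_size[0],
            (y + obj.height / 2) / out_size[1],
            obj.width / out_size[0],
            obj.height / out_size[1])
    return bg, bbox
```

In practice such a generator would be looped over many object/background pairs (and several objects per image) to build the synthetic dataset, with the returned boxes written out in the label format expected by YOLOv3.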