{"title":"One image for one strategy: human grasping with deep reinforcement based on small-sample representative data","authors":"Fei Wang, Manyi Shi, Chao Chen, Jinbiao Zhu, Yue Liu, Hao Chu","doi":"10.1007/s10489-024-05919-8","DOIUrl":null,"url":null,"abstract":"<p>As the first step in grasping operations, vision-guided grasping actions play a crucial role in enabling intelligent robots to perform complex interactive tasks. In order to solve the difficulties in data set preparation and consumption of computing resources before and during training network, we introduce a method of training human grasping strategies based on small sample representative data sets, and learn a human grasping strategy through only one depth image. Our key idea is to use the entire human grasping area instead of multiple grasping gestures so that we can greatly reduce the preparation of dataset. Then the grasping strategy is trained through the q-learning framework, the agent is allowed to continuously explore the environment so that it can overcome lack of data annotation and prediction in early stage of the visual network, then successfully map the human strategy into visual prediction. Considering the widespread clutter environment in real tasks, we introduce push actions and adopt a staged reward function to make it conducive to the grasping. Finally we learned the human grasping strategy and applied it successfully, and stably executed it on objects that not seen before, improved the convergence speed and grasping effect while reducing the consumption of computing resources. We conducted experiments on a Doosan robotic arm equipped with an Intel Realsense camera and a two-finger gripper, and achieved human strategy grasping with a high success rate in cluttered scenes.</p>","PeriodicalId":8041,"journal":{"name":"Applied Intelligence","volume":"55 1","pages":""},"PeriodicalIF":3.4000,"publicationDate":"2024-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Applied Intelligence","FirstCategoryId":"94","ListUrlMain":"https://link.springer.com/article/10.1007/s10489-024-05919-8","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
As the first step in grasping operations, vision-guided grasping actions play a crucial role in enabling intelligent robots to perform complex interactive tasks. To address the difficulty of dataset preparation and the consumption of computing resources before and during network training, we introduce a method for training human grasping strategies based on small-sample representative datasets, learning a human grasping strategy from only one depth image. Our key idea is to use the entire human grasping area instead of multiple grasping gestures, which greatly reduces dataset preparation. The grasping strategy is then trained within a Q-learning framework: the agent continuously explores the environment, which overcomes the lack of data annotation and the weak predictions of the visual network in its early stages, and successfully maps the human strategy into visual predictions. Considering the clutter that is widespread in real tasks, we introduce push actions and adopt a staged reward function that is conducive to grasping. Finally, we learned the human grasping strategy, applied it successfully, and executed it stably on previously unseen objects, improving convergence speed and grasping performance while reducing the consumption of computing resources. We conducted experiments on a Doosan robotic arm equipped with an Intel RealSense camera and a two-finger gripper, and achieved human-strategy grasping with a high success rate in cluttered scenes.
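The abstract describes a push-and-grasp Q-learning setup driven by a staged reward that favors grasps inside the demonstrated human grasping area. The sketch below is purely illustrative: the function names, reward values, and tabular update rule are assumptions inferred from the abstract's description, not the authors' implementation.

# Illustrative sketch only: a staged reward plus one Q-learning update for
# push/grasp actions. All names and numeric values are assumptions made for
# illustration, not the paper's actual reward shaping or network.

def staged_reward(action, grasp_success, scene_changed, in_human_region):
    """Staged reward: full credit for grasps inside the demonstrated human grasping
    area, partial credit for other successful grasps, and a small intermediate
    reward for pushes that usefully rearrange the clutter."""
    if action == "grasp":
        if grasp_success and in_human_region:
            return 1.0          # grasp that matches the learned human strategy
        if grasp_success:
            return 0.5          # successful grasp outside the demonstrated area
        return 0.0              # failed grasp
    if action == "push":
        return 0.25 if scene_changed else 0.0   # pushes rewarded only if they change the scene
    return 0.0

def q_update(q_sa, reward, next_q_max, alpha=0.1, gamma=0.9):
    """One temporal-difference Q-learning step:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    return q_sa + alpha * (reward + gamma * next_q_max - q_sa)

# Example: a push that rearranged the clutter, followed by the value update.
r = staged_reward("push", grasp_success=False, scene_changed=True, in_human_region=False)
print(q_update(q_sa=0.2, reward=r, next_q_max=0.8))

In the paper's setting the tabular value q_sa would instead come from a visual network predicting pixel-wise action values on the depth image; the staged structure of the reward is what lets early exploration still provide a learning signal before that network is accurate.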
About the journal:
With a focus on research in artificial intelligence and neural networks, this journal addresses solutions to real-life manufacturing, defense, management, government, and industrial problems that are too complex to be solved through conventional approaches and that require the simulation of intelligent thought processes, heuristics, applications of knowledge, and distributed and parallel processing. The integration of these multiple approaches in solving complex problems is of particular importance.
The journal presents new and original research and technological developments, addressing real and complex issues applicable to difficult problems. It provides a medium for exchanging scientific research and technological achievements accomplished by the international community.