Simultaneously Determining Target Object and Transport Velocity for Manipulator and Moving Vehicle in Piece-Picking Operation
N. Kimura, Ryo Sakai, Shinichi Katsumata, Nobuhiro Chihara
2019 IEEE 15th International Conference on Automation Science and Engineering (CASE), pp. 1066-1073, August 2019
DOI: 10.1109/COASE.2019.8843236
Citations: 1
Abstract
We propose a deep-learning-based method that simultaneously determines the target object to be picked up by an autonomous manipulator and the velocity of an automated guided vehicle (AGV) that passes in front of the manipulator while carrying a carton case containing the target and other objects. Our method enables efficient automated piece-picking in warehouses without the AGV having to pause in front of the manipulator. To prepare supervised data sets of color images of objects randomly piled in the carton case, a simulator checks whether each object is “pickable” by attempting to plan a motion in which the manipulator’s hand reaches the object while avoiding surrounding obstacles, using depth images and accounting for the carton case’s movement and velocity. We then train multiple deep convolutional neural networks (DCNNs), one per velocity level, to detect grasp points for only the pickable objects in an RGB image. In an experimental test, a prototype system using our method successfully picked ordered objects without the AGV pausing, while the AGV adjusted its velocity according to the layout of the objects in the carton case.
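The selection step the abstract describes — one grasp detector per AGV velocity level, with the system choosing both a target grasp point and a transport velocity — can be sketched as follows. This is a minimal illustration only, not the authors' implementation: the function name, the detector interface, and the score threshold are all assumptions, and it simply tries velocity levels from fastest to slowest, returning the first level whose detector still finds a pickable object.

```python
# Hypothetical sketch of choosing a grasp point and an AGV velocity jointly,
# given one velocity-specific grasp detector per level (names are assumptions).
from typing import Callable, List, Optional, Tuple

GraspPoint = Tuple[float, float]  # (x, y) in image coordinates
# A detector maps an RGB image to a list of (grasp_point, score) candidates.
Detector = Callable[[object], List[Tuple[GraspPoint, float]]]

def select_target_and_velocity(
    rgb_image: object,
    detectors: List[Tuple[float, Detector]],  # (velocity, detector) pairs
    score_threshold: float = 0.5,
) -> Optional[Tuple[GraspPoint, float]]:
    """Return (grasp_point, velocity), or None if nothing is pickable."""
    # Prefer the fastest velocity level that still yields a pickable object,
    # so throughput is maximized and the AGV never has to pause.
    for velocity, detect in sorted(detectors, key=lambda d: -d[0]):
        candidates = detect(rgb_image)
        pickable = [(p, s) for p, s in candidates if s >= score_threshold]
        if pickable:
            best_point, _ = max(pickable, key=lambda c: c[1])
            return best_point, velocity
    return None  # no velocity level yields a pickable object
```

In this sketch the fastest level is preferred; the real method instead varies the AGV velocity with the object layout, so the scoring and fallback policy would be more involved.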