Learning prohibited and authorised grasping locations from a few demonstrations

François Hélénon, Laurent Bimont, E. Nyiri, Stéphane Thiery, O. Gibaru

2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)
DOI: 10.1109/RO-MAN47096.2020.9223486 · Published: August 2020
Citations: 5
Abstract
Our motivation is to ease the reconfiguration of robots for pick-and-place tasks in an industrial context. This paper proposes a fast-learning neural network model, trained from one or a few demonstrations in less than 5 minutes, that efficiently predicts grasping locations on a specific object. The proposed methodology is easy to apply in an industrial context, as it is based exclusively on the operator’s demonstrations and requires no CAD model, existing database or simulator. Since a neural network’s predictions can be erroneous, especially when it is trained on very little data, we propose to indicate both authorised and prohibited locations for safety reasons. This allows us to handle fragile objects and to perform task-oriented grasping. Our model learns the semantic representation of objects (prohibited/authorised) thanks to a simplified data representation, a simplified neural network architecture and an adequate training framework. We trained specific networks for different objects and conducted experiments on a real 7-DOF robot, which showed good performance (70 to 100% depending on the object) using only one demonstration. The proposed model generalises well: performance remains good even when grasping several similar objects with the same network trained on only one of them.
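To make the idea concrete, here is a minimal sketch of a per-location classifier trained from the few labelled points a single operator demonstration could provide, predicting whether each grasp candidate is authorised or prohibited. This is an illustration of the concept only: the architecture, the data format (normalised 2-D image coordinates) and all names (GraspLocationClassifier, train_from_demo) are assumptions, not the authors' actual model, which is described in the paper behind the DOI above.

```python
# Hypothetical sketch (not the authors' code): a tiny network that labels
# grasp candidate locations on an object as authorised or prohibited,
# trained from the handful of labelled points one demonstration provides.
import torch
import torch.nn as nn


class GraspLocationClassifier(nn.Module):
    """Small MLP mapping a 2-D image location to authorised/prohibited logits.

    Assumed architecture for illustration; the paper's 'simplified neural
    network architecture' is not reproduced here.
    """

    def __init__(self, in_dim: int = 2, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),  # logits: [prohibited, authorised]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def train_from_demo(points: torch.Tensor, labels: torch.Tensor,
                    epochs: int = 500, lr: float = 1e-3) -> GraspLocationClassifier:
    """Fit the classifier on the few (location, label) pairs collected
    during a single operator demonstration."""
    model = GraspLocationClassifier(in_dim=points.shape[1])
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(points), labels)
        loss.backward()
        opt.step()
    return model


if __name__ == "__main__":
    # Toy demonstration: 6 labelled locations in normalised image coordinates.
    pts = torch.tensor([[0.10, 0.20], [0.15, 0.25], [0.80, 0.80],
                        [0.85, 0.75], [0.50, 0.10], [0.50, 0.90]])
    lab = torch.tensor([1, 1, 0, 0, 1, 0])  # 1 = authorised, 0 = prohibited
    model = train_from_demo(pts, lab)
    probs = torch.softmax(model(pts), dim=1)
    print(probs)  # per-location prohibited/authorised probabilities
```

In this toy setup, the softmax output gives a probability per candidate location; a robot controller could then grasp only where the authorised probability exceeds a safety threshold, consistent with the abstract's safety motivation for predicting prohibited as well as authorised regions.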