Object-Centric Grasping Transferability: Linking Meshes to Postures
Diego Hidalgo-Carvajal, Carlos Magno Catharino Olsson Valle, Abdeldjallil Naceri, S. Haddadin
2022 IEEE-RAS 21st International Conference on Humanoid Robots (Humanoids), published 2022-11-28
DOI: 10.1109/Humanoids53995.2022.10000192
Citations: 2
Abstract
Attaining human hand manipulation capabilities is a sought-after goal of robotic manipulation. Several works have focused on understanding and applying human manipulation insights in robotic applications. However, few have considered objects as central elements for increasing the generalization properties of existing methods. In this study, we explore context-based grasping information transferability between objects by using mesh-based object representations. To do so, we empirically labeled, in a mesh point-wise manner, 10 grasping postures onto a set of 12 purposely selected objects. Subsequently, we trained our convolutional neural network (CNN)-based architecture with the mesh representation of a single object, associating grasping postures with its local regions. We tested our network across multiple objects of distinct similarity values. Results show that our network can successfully estimate non-feasible grasping regions as well as feasible grasping postures. Our results suggest the existence of an abstract relation between the predicted context-based grasping postures and the geometrical properties of both the training and test objects. Our proposed approach aims to expand grasp learning research by linking local segmented meshes to postures. Such a concept can be applied to grasp new objects using anthropomorphic robot hands.
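The abstract describes mapping local mesh regions to one of 10 grasping postures (plus non-feasible regions) in a point-wise manner. As a rough illustration of what such a per-vertex labeling task looks like, the following is a minimal sketch, not the paper's architecture: it assumes hypothetical per-vertex features (position plus surface normal) and stands in for the trained CNN with a tiny random-weight network, simply to show the input/output shapes of point-wise posture prediction.

```python
import numpy as np

# Hypothetical sketch of point-wise grasp-posture classification on a mesh.
# All sizes and features here are illustrative assumptions, not taken from
# the paper; the real method uses a trained CNN-based architecture.

rng = np.random.default_rng(0)

N_VERTICES = 500            # vertices in the object mesh (assumed)
N_FEATURES = 6              # xyz position + surface normal (assumed)
N_POSTURES = 10             # labeled grasping postures, per the abstract
N_CLASSES = N_POSTURES + 1  # extra class for non-feasible regions

# Stand-in for a mesh: one feature row per vertex.
vertices = rng.normal(size=(N_VERTICES, N_FEATURES))

# A single hidden layer with random weights replaces the trained network.
W1 = rng.normal(scale=0.1, size=(N_FEATURES, 32))
W2 = rng.normal(scale=0.1, size=(32, N_CLASSES))

hidden = np.maximum(vertices @ W1, 0.0)  # ReLU activation
logits = hidden @ W2                     # shape (N_VERTICES, N_CLASSES)
pred = logits.argmax(axis=1)             # one class id per mesh vertex

print(pred.shape)
```

The key point is the output shape: every vertex receives a posture label (or the non-feasible label), so contiguous regions of same-labeled vertices form the local segmented meshes that the paper links to postures.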