A. Matsikis, T. Zoumpoulidis, F.H. Broicher, K. Kraiss
{"title":"Learning object-specific vision-based manipulation in virtual environments","authors":"A. Matsikis, T. Zoumpoulidis, F.H. Broicher, K. Kraiss","doi":"10.1109/ROMAN.2002.1045623","DOIUrl":null,"url":null,"abstract":"In this paper a method for learning object-specific vision-based manipulation is described. The proposed approach uses a virtual environment containing models of the objects and the manipulator with an eye-in-hand camera to simplify and automate the training procedure. An object with a form that requires a unique final gripper position and orientation was used to train and test the implemented algorithms. A series of smooth paths leading to the final position are generated based on a typical path defined by an operator. Images and corresponding manipulator positions along the produced paths are gathered in the virtual environment and used for the training of a vision-based controller. The controller uses a structure of radial-basis function (RBF) networks and has to execute a long reaching movement that guides the manipulator to the final position so that afterwards only minor justification of the gripper is needed to complete the grasp.","PeriodicalId":222409,"journal":{"name":"Proceedings. 11th IEEE International Workshop on Robot and Human Interactive Communication","volume":"85 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2002-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings. 11th IEEE International Workshop on Robot and Human Interactive Communication","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ROMAN.2002.1045623","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 4
Abstract
This paper describes a method for learning object-specific vision-based manipulation. The proposed approach uses a virtual environment containing models of the objects and of a manipulator with an eye-in-hand camera to simplify and automate the training procedure. An object whose shape admits a unique final gripper position and orientation was used to train and test the implemented algorithms. A series of smooth paths leading to the final position is generated from a typical path defined by an operator. Images and the corresponding manipulator positions along the generated paths are collected in the virtual environment and used to train a vision-based controller. The controller is built from a structure of radial-basis function (RBF) networks and executes a long reaching movement that guides the manipulator to the final position, so that afterwards only a minor adjustment of the gripper is needed to complete the grasp.
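The abstract describes training an RBF-network mapping from images to manipulator positions on data gathered along the generated paths. The paper does not provide code, so the following is only a minimal sketch of the general technique: image feature vectors are mapped through Gaussian radial basis functions (centers drawn from the training set), and the linear output weights are fit by least squares. All function names, the feature representation, and the hyperparameters (`n_centers`, `width`) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def rbf_features(X, centers, width):
    # Gaussian RBF activations: one column per center, one row per sample.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def train_rbf_controller(features, positions, n_centers=50, width=2.0, seed=0):
    """Fit an RBF regressor mapping image features to manipulator positions.

    Centers are sampled from the training features (a common stand-in for
    k-means placement); output weights are solved by linear least squares.
    """
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(features), size=n_centers, replace=False)
    centers = features[idx]
    Phi = rbf_features(features, centers, width)
    W, *_ = np.linalg.lstsq(Phi, positions, rcond=None)
    return centers, W

def predict_positions(features, centers, W, width=2.0):
    # Apply the trained RBF mapping to new image features.
    return rbf_features(features, centers, width) @ W

# Synthetic stand-in data: 8-D "image features" and 3-D target positions.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 8))
Y = np.sin(X[:, :3])
centers, W = train_rbf_controller(X, Y)
Y_hat = predict_positions(X, centers, W)
```

In the paper's setting, the virtual environment supplies the (image, manipulator position) pairs along each generated path; here synthetic arrays stand in for both.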