{"title":"约束任务的示范学习*","authors":"Dimitrios Papageorgiou, Z. Doulgeri","doi":"10.1109/RO-MAN47096.2020.9223579","DOIUrl":null,"url":null,"abstract":"In many industrial applications robot’s motion has to be subjected to spatial constraints imposed by the geometry of the task, e.g. motion of the end-effector on a surface. Current learning by demonstration methods encode the motion either in the Cartesian space of the end-effector, or in the configuration space of the robot. In those cases, the spatial generalization of the motion does not guarantee that the motion will in any case respect the spatial constraints of the task, as no knowledge of those constraints is exploited. In this work, a novel approach for encoding a kinematic behavior is proposed, which takes advantage of such a knowledge and guarantees that the motion will, in any case, satisfy the spatial constraints and the motion pattern will not be distorted. The proposed approach is compared with respect to its ability for spatial generalization, to two different dynamical system based approaches implemented on the Cartesian space via experiments.","PeriodicalId":383722,"journal":{"name":"2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"39 6","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"Learning by demonstration for constrained tasks*\",\"authors\":\"Dimitrios Papageorgiou, Z. Doulgeri\",\"doi\":\"10.1109/RO-MAN47096.2020.9223579\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In many industrial applications robot’s motion has to be subjected to spatial constraints imposed by the geometry of the task, e.g. motion of the end-effector on a surface. Current learning by demonstration methods encode the motion either in the Cartesian space of the end-effector, or in the configuration space of the robot. In those cases, the spatial generalization of the motion does not guarantee that the motion will in any case respect the spatial constraints of the task, as no knowledge of those constraints is exploited. In this work, a novel approach for encoding a kinematic behavior is proposed, which takes advantage of such a knowledge and guarantees that the motion will, in any case, satisfy the spatial constraints and the motion pattern will not be distorted. 
The proposed approach is compared with respect to its ability for spatial generalization, to two different dynamical system based approaches implemented on the Cartesian space via experiments.\",\"PeriodicalId\":383722,\"journal\":{\"name\":\"2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)\",\"volume\":\"39 6\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-08-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/RO-MAN47096.2020.9223579\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/RO-MAN47096.2020.9223579","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract: In many industrial applications the robot's motion is subject to spatial constraints imposed by the geometry of the task, e.g. the motion of the end-effector on a surface. Current learning-by-demonstration methods encode the motion either in the Cartesian space of the end-effector or in the configuration space of the robot. In those cases, spatial generalization of the motion does not guarantee that the generalized motion respects the spatial constraints of the task, since no knowledge of those constraints is exploited. In this work, a novel approach for encoding a kinematic behavior is proposed, which exploits this knowledge and guarantees that the motion always satisfies the spatial constraints without distorting the motion pattern. The proposed approach is compared experimentally, with respect to its spatial generalization ability, to two different dynamical-system-based approaches implemented in the Cartesian space.
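The abstract does not detail the encoding itself, but the general idea of constraint-consistent encoding can be illustrated with a minimal sketch: instead of learning the trajectory in the end-effector's Cartesian space, the demonstrated pattern is expressed in coordinates that parameterize the constraint surface, so any generalization carried out in those coordinates maps back onto the surface by construction. The sketch below is an assumption for illustration only, not the authors' method: the dome-shaped surface, the affine rescaling used as a stand-in for a learned dynamical system, and the example trajectory are all hypothetical.

```python
# Minimal sketch (assumed, not the paper's method): encode and generalize a
# motion in surface coordinates (u, v) so the Cartesian result always lies on
# the constraint surface.
import numpy as np


def surface(u, v):
    """Hypothetical constraint surface: a gentle dome z = 0.1 (u^2 + v^2)."""
    return np.array([u, v, 0.1 * (u**2 + v**2)])


# "Demonstrated" path, expressed directly in the 2-D surface chart.
t = np.linspace(0.0, 1.0, 200)
demo_uv = np.stack([t, 0.2 * np.sin(2 * np.pi * t)], axis=1)


def generalize(demo, new_start, new_goal):
    """Affine rescaling of the demonstrated pattern to new start/goal points,
    performed entirely in the surface chart (a stand-in for a learned
    dynamical-system encoding such as a DMP)."""
    old_start, old_goal = demo[0], demo[-1]
    span = old_goal - old_start
    # Avoid division by zero on degenerate axes of the demonstration.
    scale = (new_goal - new_start) / np.where(np.abs(span) > 1e-9, span, 1.0)
    return new_start + (demo - old_start) * scale


# Generalize to new endpoints; every sample of the resulting Cartesian path
# lies on the constraint surface, whatever the new start and goal are.
new_uv = generalize(demo_uv,
                    new_start=np.array([-0.5, 0.3]),
                    new_goal=np.array([0.8, -0.4]))
path_xyz = np.array([surface(u, v) for u, v in new_uv])
print(path_xyz.shape)  # (200, 3): the generalized path, on the surface
```

By contrast, rescaling the same demonstration directly in Cartesian (x, y, z) coordinates would, in general, produce points off the surface, which is the failure mode the abstract attributes to purely Cartesian or configuration-space encodings.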