Real-time collision detection based on one class SVM for safe movement of humanoid robot
Pub Date: 2017-11-01 | DOI: 10.1109/HUMANOIDS.2017.8246962
Kaname Narukawa, T. Yoshiike, Kenta Tanaka, Mitsuhide Kuroda
In this paper, a new real-time collision detection method based on a one-class support vector machine (SVM) is proposed for the safe movement of humanoid robots. Generating the representational model for collision detection requires only normal movement data; it does not require collision data, which is difficult to obtain. With this method, a real-time emergency stop function for humanoid robots is activated when collisions occur during quadruped walking. It is important that an operator controlling the robot remotely can interpret collision information properly. To help the operator understand the situation, localization of the collision point is also implemented with a multi-class SVM.
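The appeal of the one-class formulation is that it needs only normal-operation data: the SVM learns a boundary around normal sensor readings, and anything outside it is flagged as a collision. The sketch below illustrates that pattern with scikit-learn; the feature dimensionality, nu value, and synthetic data are assumptions for illustration, not the authors' setup.

```python
# Illustrative sketch: train a one-class SVM on normal-movement features
# and flag outliers as collisions. Features and parameters are assumed.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Stand-in "normal walking" features, e.g. joint torques and IMU readings.
X_normal = rng.normal(0.0, 1.0, size=(5000, 12))

# nu bounds the fraction of training samples treated as outliers.
detector = OneClassSVM(kernel="rbf", nu=0.01, gamma="scale").fit(X_normal)

def is_collision(sample: np.ndarray) -> bool:
    """Return True if the sensor sample falls outside the learned
    normal-movement region (predict() yields -1 for outliers)."""
    return detector.predict(sample.reshape(1, -1))[0] == -1

# An out-of-distribution sample, as a collision might produce:
print(is_collision(np.full(12, 6.0)))  # likely True
```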
{"title":"Real-time collision detection based on one class SVM for safe movement of humanoid robot","authors":"Kaname Narukawa, T. Yoshiike, Kenta Tanaka, Mitsuhide Kuroda","doi":"10.1109/HUMANOIDS.2017.8246962","DOIUrl":"https://doi.org/10.1109/HUMANOIDS.2017.8246962","url":null,"abstract":"In this paper, a new real-time collision detection method based on the one class support vector machine method for the safe movement of humanoid robots is proposed. To generate a representational model for collision detection requires only normal movement data and does not require collision data which is not easy to obtain. With this method, a real-time emergency stop function for humanoid robots is activated during collisions while walking quadruped. It is important for the operator who operates the robot remotely to be able to interpret collision information properly. To support the operator with information to understand situations, localization of a collision point is also implemented with a multi class support vector machine method.","PeriodicalId":143992,"journal":{"name":"2017 IEEE-RAS 17th International Conference on Humanoid Robotics (Humanoids)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132892162","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Graph-based visual semantic perception for humanoid robots
Pub Date: 2017-11-01 | DOI: 10.1109/HUMANOIDS.2017.8246974
Markus Grotz, P. Kaiser, E. Aksoy, Fabian Paus, T. Asfour
Semantic understanding of unstructured environments plays an essential role in the autonomous planning and execution of whole-body humanoid locomotion and manipulation tasks. We introduce a new graph-based, data-driven method for the semantic representation of unknown environments based on visual sensor data streams. The proposed method extends our previous work, in which loco-manipulation scene affordances are detected in a fully unsupervised manner. We build a geometric primitive-based model of the perceived scene and assign interaction possibilities, i.e., affordances, to the individual primitives. The major contribution of this paper is the enrichment of the extracted scene representation with semantic object information through spatio-temporal fusion of primitives during perception. To this end, we combine the primitive-based scene representation with object detection methods to identify higher-level semantic structures in the scene. Qualitative and quantitative evaluation of the proposed method in various experiments, in simulation and on the humanoid robot ARMAR-III, demonstrates the effectiveness of the approach.
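As a reading aid, here is a minimal sketch of the kind of data structure the abstract describes: a scene graph whose nodes are geometric primitives carrying affordances, later enriched with labels from an object detector. All class and function names here are our assumptions, not the authors' code.

```python
# Minimal sketch of a primitive-based scene graph with affordances,
# enriched by object-detection labels. Structure is assumed.
from dataclasses import dataclass, field

@dataclass
class Primitive:
    pid: int
    kind: str                          # e.g. "plane", "cylinder"
    affordances: set = field(default_factory=set)
    semantic_label: str | None = None  # filled in by fusion

scene = {
    0: Primitive(0, "plane", {"support"}),
    1: Primitive(1, "cylinder", {"grasp"}),
}
edges = {(0, 1)}                       # spatial adjacency between primitives

def fuse_detection(scene, pid, label, confidence, threshold=0.7):
    """Attach an object-detector label to a primitive if confident enough."""
    if confidence >= threshold:
        scene[pid].semantic_label = label

fuse_detection(scene, 1, "bottle", 0.9)
print(scene[1])  # the cylinder primitive is now a graspable "bottle"
```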
{"title":"Graph-based visual semantic perception for humanoid robots","authors":"Markus Grotz, P. Kaiser, E. Aksoy, Fabian Paus, T. Asfour","doi":"10.1109/HUMANOIDS.2017.8246974","DOIUrl":"https://doi.org/10.1109/HUMANOIDS.2017.8246974","url":null,"abstract":"Semantic understanding of unstructured environments plays an essential role in the autonomous planning and execution of whole-body humanoid locomotion and manipulation tasks. We introduce a new graph-based and data-driven method for semantic representation of unknown environments based on visual sensor data streams. The proposed method extends our previous work, in which loco-manipulation scene affordances are detected in a fully unsupervised manner. We build a geometric primitive-based model of the perceived scene and assign interaction possibilities, i.e. affordances, to the individual primitives. The major contribution of this paper is the enrichment of the extracted scene representation with semantic object information through spatio-temporal fusion of primitives during the perception. To this end, we combine the primitive-based scene representation with object detection methods to identify higher semantic structures in the scene. The qualitative and quantitative evaluation of the proposed method in various experiments in simulation and on the humanoid robot ARMAR-III demonstrates the effectiveness of the approach.","PeriodicalId":143992,"journal":{"name":"2017 IEEE-RAS 17th International Conference on Humanoid Robotics (Humanoids)","volume":"83 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124946557","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tool force adaptation in soil-digging task for humanoid robot
Pub Date: 2017-11-01 | DOI: 10.1109/HUMANOIDS.2017.8246901
Shintaro Komatsu, Youhei Kakiuchi, Shunichi Nozawa, Yuta Kojio, Fumihito Sugai, K. Okada, M. Inaba
Simultaneous control of the position and force of a robot is one of the difficult and important problems in robotics. Even when a desirable positional trajectory for a robot's end effector or tool is available, it is not easy to know how much force must be applied to execute the planned task. We propose a method that enables robots to exert the force required to carry out a task successfully. In this paper, we introduce a method for online updating of the force applied to the environment through a tool, together with modification of the Center of Gravity (CoG), based on a reference force. The update direction of the force is set in advance, considering the interaction between the tool and the environment. We take manipulation of a shovel as an example. To verify the effectiveness of our method, the humanoid robot JAXON demonstrates a soil-digging task under various conditions.
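The core update step can be pictured as nudging the commanded tool force toward the reference force along a preset direction. The sketch below shows that idea; the gain, direction vector, and numbers are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of the online force-update idea: adjust the commanded
# tool force along a preset direction until the measured force component
# matches the reference. Gain and direction are assumptions.
import numpy as np

update_dir = np.array([0.0, 0.0, -1.0])  # set in advance from tool/terrain contact
k_f = 0.05                               # force-tracking gain (assumed)

def update_tool_force(f_cmd, f_ref, f_meas):
    """Move the commanded force along update_dir so the measured
    force component approaches the reference force."""
    err = f_ref - f_meas @ update_dir    # error along the update direction
    return f_cmd + k_f * err * update_dir

f_cmd = np.zeros(3)
f_meas = np.array([0.0, 0.0, -20.0])     # 20 N currently pushing into the soil
f_cmd = update_tool_force(f_cmd, f_ref=30.0, f_meas=f_meas)
print(f_cmd)                             # command nudged further downward
```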
{"title":"Tool force adaptation in soil-digging task for humanoid robot","authors":"Shintaro Komatsu, Youhei Kakiuchi, Shunichi Nozawa, Yuta Kojio, Fumihito Sugai, K. Okada, M. Inaba","doi":"10.1109/HUMANOIDS.2017.8246901","DOIUrl":"https://doi.org/10.1109/HUMANOIDS.2017.8246901","url":null,"abstract":"Simultaneous control of position and force of robots is one of the difficult and important problems in the field of robotics. Even if we can get a desirable positional trajectory of robots' end effectors or tools that they use, it is not easy to know how much force we should apply in order to execute planned tasks. We propose a method that enables robots to exert the required force to successfully carry out tasks. In this paper, we introduce a method to realize online updating of the force applied to the environment through tools and modification of Center of Gravity (CoG) based on the reference force. The update direction of the force is set in advance considering the interaction between tools and environment. We take manipulation of a shovel as an example. To verify the effect of our method, a humanoid robot JAXON demonstrates the soil-digging task under various conditions.","PeriodicalId":143992,"journal":{"name":"2017 IEEE-RAS 17th International Conference on Humanoid Robotics (Humanoids)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128051716","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A framework for evaluating motion segmentation algorithms
Pub Date: 2017-11-01 | DOI: 10.1109/HUMANOIDS.2017.8239541
Christian R. G. Dreher, Nicklas Kulp, Christian Mandery, Mirko Wächter, T. Asfour
Many algorithms for segmenting human whole-body motion have been proposed in the literature. However, the wide range of use cases, datasets, and quality measures used for their evaluation makes comparing these algorithms challenging. In this paper, we introduce a framework that puts motion segmentation algorithms on a unified testing ground and makes them directly comparable. The testing ground features both a set of quality measures known from the literature and a novel approach tailored to the evaluation of motion segmentation algorithms, termed the Integrated Kernel approach. Datasets of motion recordings with ground truth are included as well. They are labelled in a new way that hierarchically organises the ground truth, to cover the different use cases that segmentation algorithms can address. The framework and datasets are publicly available and are intended as a service to the community for the comparison and evaluation of existing and new motion segmentation algorithms.
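For intuition, one standard quality measure such a testing ground typically includes is boundary precision/recall within a tolerance window. The sketch below implements that generic measure; it is not the paper's Integrated Kernel approach, and the tolerance value is an assumption.

```python
# Generic segmentation-quality measure: F1 over detected segment
# boundaries, with each ground-truth boundary matched to at most one
# detection inside a tolerance window (seconds).
def boundary_f1(detected, truth, tol=0.1):
    remaining = list(detected)
    tp = 0
    for t in truth:
        hit = next((d for d in remaining if abs(d - t) <= tol), None)
        if hit is not None:
            remaining.remove(hit)   # one detection can match one boundary
            tp += 1
    precision = tp / len(detected) if detected else 0.0
    recall = tp / len(truth) if truth else 0.0
    return 2 * precision * recall / (precision + recall) if tp else 0.0

print(boundary_f1([1.02, 2.4, 3.9], [1.0, 2.5, 4.0], tol=0.15))  # 1.0
```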
{"title":"A framework for evaluating motion segmentation algorithms","authors":"Christian R. G. Dreher, Nicklas Kulp, Christian Mandery, Mirko Wächter, T. Asfour","doi":"10.1109/HUMANOIDS.2017.8239541","DOIUrl":"https://doi.org/10.1109/HUMANOIDS.2017.8239541","url":null,"abstract":"There have been many proposals for algorithms segmenting human whole-body motion in the literature. However, the wide range of use cases, datasets, and quality measures that were used for the evaluation render the comparison of algorithms challenging. In this paper, we introduce a framework that puts motion segmentation algorithms on a unified testing ground and provides a possibility to allow comparing them. The testing ground features both a set of quality measures known from the literature and a novel approach tailored to the evaluation of motion segmentation algorithms, termed Integrated Kernel approach. Datasets of motion recordings, provided with a ground truth, are included as well. They are labelled in a new way, which hierarchically organises the ground truth, to cover different use cases that segmentation algorithms can possess. The framework and datasets are publicly available and are intended to represent a service for the community regarding the comparison and evaluation of existing and new motion segmentation algorithms.","PeriodicalId":143992,"journal":{"name":"2017 IEEE-RAS 17th International Conference on Humanoid Robotics (Humanoids)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124028460","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dynamic gait transition between walking, running and hopping for push recovery
Pub Date: 2017-11-01 | DOI: 10.1109/HUMANOIDS.2017.8239530
Takumi Kamioka, Hiroyuki Kaneko, Mitsuhide Kuroda, C. Tanaka, Shinya Shirokura, M. Takeda, T. Yoshiike
Re-planning of the gait trajectory is a crucial ability for compensating for external disturbances. To date, a large number of methods for re-planning footsteps and step timing have been proposed. However, methods that allow a robot to change its locomotion mode, e.g., from walking to running or from walking to hopping, have not been proposed. In this paper, we propose a method that re-plans not only footsteps and timing but also the locomotion mode, chosen among walking, running, and hopping. The re-planning of the locomotion mode relies on parallel computation and a ranking system with a novel cost function. To validate the method, we conducted push recovery experiments: pushing the robot forward while it walked in place, and pushing it laterally while it walked forward. The experimental results showed that the proposed algorithm effectively compensates for external disturbances by making a locomotion transition.
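The parallel-ranking idea can be sketched as: plan one candidate per locomotion mode concurrently, score each with a cost function, and execute the cheapest feasible plan. In the toy version below, the cost terms and capture limits are placeholders, not the paper's actual cost function.

```python
# Hedged sketch of parallel mode re-planning with a ranking cost.
# Effort/capture numbers are illustrative placeholders.
from concurrent.futures import ThreadPoolExecutor

MODES = ("walk", "run", "hop")

def plan_and_cost(mode, disturbance):
    # Placeholder cost: faster modes tolerate larger pushes but cost
    # more energy; a real planner would roll out full gait trajectories.
    effort = {"walk": 1.0, "run": 2.0, "hop": 3.0}[mode]
    capture = {"walk": 1.0, "run": 2.5, "hop": 2.0}[mode]
    infeasible = float("inf") if disturbance > capture else 0.0
    return effort + infeasible, mode

def replan(disturbance):
    with ThreadPoolExecutor() as pool:
        ranked = sorted(pool.map(lambda m: plan_and_cost(m, disturbance), MODES))
    return ranked[0][1]          # cheapest feasible mode wins

print(replan(0.5))   # small push: stay walking
print(replan(2.2))   # strong push: transition to running
```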
{"title":"Dynamic gait transition between walking, running and hopping for push recovery","authors":"Takumi Kamioka, Hiroyuki Kaneko, Mitsuhide Kuroda, C. Tanaka, Shinya Shirokura, M. Takeda, T. Yoshiike","doi":"10.1109/HUMANOIDS.2017.8239530","DOIUrl":"https://doi.org/10.1109/HUMANOIDS.2017.8239530","url":null,"abstract":"Re-planning of gait trajectory is a crucial ability to compensate for external disturbances. To date, a large number of methods for re-planning footsteps and timing have been proposed. However, robots with the ability to change locomotion from walking to running or from walking to hopping were never proposed. In this paper, we propose a method for replanning not only for footsteps and timing but also locomotion mode which consists of walking, running and hopping. The re-planning method of locomotion mode consists of parallel computing and a ranking system with a novel cost function. To validate the method, we conducted push recovery experiments which were pushing in the forward direction when walking on the spot and pushing in the lateral direction when walking in the forward direction. Results of experiments showed that the proposed algorithm effectively compensated for external disturbances by making a locomotion transition.","PeriodicalId":143992,"journal":{"name":"2017 IEEE-RAS 17th International Conference on Humanoid Robotics (Humanoids)","volume":"125 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132134802","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Analyzing children's expectations from robotic companions in educational settings
Pub Date: 2017-11-01 | DOI: 10.1109/HUMANOIDS.2017.8246956
M. Blancas, V. Vouloutsi, Samuel Fernando, Martí Sánchez-Fibla, R. Zucca, T. Prescott, A. Mura, P. Verschure
The use of robots as educational partners has been extensively explored, but less is known about the characteristics these robots should have to meet children's expectations. Thus, the purpose of this study is to analyze children's assumptions regarding the morphology, functionality, and body features, among others, that robots should have in order to interact with them. To do so, we analyzed 142 drawings by children aged 9 to 10, along with their answers to a survey administered after they interacted with different robotic platforms. The main results converge on a genderless robot with anthropomorphic (but machine-like) characteristics.
{"title":"Analyzing children's expectations from robotic companions in educational settings","authors":"M. Blancas, V. Vouloutsi, Samuel Fernando, Martí Sánchez-Fibla, R. Zucca, T. Prescott, A. Mura, P. Verschure","doi":"10.1109/HUMANOIDS.2017.8246956","DOIUrl":"https://doi.org/10.1109/HUMANOIDS.2017.8246956","url":null,"abstract":"The use of robots as educational partners has been extensively explored, but less is known about the required characteristics these robots should have to meet children's expectations. Thus the purpose of this study is to analyze children's assumptions regarding morphology, functionality, and body features, among others, that robots should have to interact with them. To do so, we analyzed 142 drawings from 9 to 10 years old children and their answers to a survey provided after interacting with different robotic platforms. The main results convey on a gender-less robot with anthropomorphic (but machine-like) characteristics.","PeriodicalId":143992,"journal":{"name":"2017 IEEE-RAS 17th International Conference on Humanoid Robotics (Humanoids)","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128160513","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Biomimetic upper limb mechanism of humanoid robot for shock resistance based on viscoelasticity
Zezheng Zhang, Huaxin Liu, Zhangguo Yu, Xuechao Chen, Qiang Huang, Qinqin Zhou, Zhaoyang Cai, X. Guo, Weimin Zhang
Pub Date: 2017-11-01 | DOI: 10.1109/HUMANOIDS.2017.8246939
Humanoid robots face a high risk of falling when they walk or operate in uncertain environments. In this paper, we propose a biomimetic mechanism for the upper limb of a humanoid robot that provides shock resistance when the robot falls forward. The mechanism is based on viscoelasticity and is modeled on human bones and muscles to provide support and buffering. We install a series elastic component in the robot's elbow, together with a viscoelastically active, pneumatically actuated impact protection device. We performed forward-fall experiments on our experimental platform, using an encoder, an IMU, an air gauge, and an F-T sensor to collect the experimental data. Analysis of the data shows that the proposed biomimetic mechanism, modeled on actual human bones and muscles, can support the robot's body, absorb the falling impact, and protect against fall damage.
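The buffering principle is that of a viscoelastic (spring-damper) element between the arm and the impact: the spring stores the fall energy while the damper dissipates it, spreading the contact force over time. The short simulation below illustrates this with a linear Kelvin-Voigt model; the stiffness, damping, mass, and impact velocity are assumed values, not the paper's parameters.

```python
# Illustrative spring-damper (Kelvin-Voigt) impact model for the
# series-elastic buffering idea. All parameter values are assumptions.
k, c, m = 2000.0, 60.0, 8.0   # spring [N/m], damper [N*s/m], mass [kg]
dt, v = 1e-4, -2.0            # time step [s], downward impact velocity [m/s]
x = 0.0                       # elastic deflection [m]

peak = 0.0
for _ in range(5000):         # 0.5 s of simulated contact
    f = -k * x - c * v        # viscoelastic restoring force on the mass
    v += (f / m) * dt
    x += v * dt
    peak = max(peak, abs(f))

print(f"peak contact force ~ {peak:.0f} N")
```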
{"title":"Biomimetic upper limb mechanism of humanoid robot for shock resistance based on viscoelasticity","authors":"Zezheng Zhang, Huaxin Liu, Zhangguo Yu, Xuechao Chen, Qiang Huang, Qinqin Zhou, Zhaoyang Cai, X. Guo, Weimin Zhang","doi":"10.1109/HUMANOIDS.2017.8246939","DOIUrl":"https://doi.org/10.1109/HUMANOIDS.2017.8246939","url":null,"abstract":"Humanoid robots encounter high falling risks when they walk or operate in an uncertain environment. In this paper, we propose a biomimetic mechanism for the upper limb of a humanoid robot that provides shock resistance when the robot falls forward. This biomimetic mechanism is based on viscoelasticity, and was modeled on human bones and muscles to achieve supporting and buffering. We install a series elastic component within the robot's elbow and also install a viscoelastically active pneumatically actuated impact protection device. We perform the falling forward experiments using our experimental platform, and we employ encoder, IMU, air gauge and F-T sensor to collect the experimental data. Based on the analysis of the experimental data, we conclude that the proposed biomimetic mechanism which is modeled on actual human bones and muscles can support the robot body, absorb the falling impact and against falling damage.","PeriodicalId":143992,"journal":{"name":"2017 IEEE-RAS 17th International Conference on Humanoid Robotics (Humanoids)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116903819","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Emergence of human-comparable balancing behaviours by deep reinforcement learning
Pub Date: 2017-11-01 | DOI: 10.1109/HUMANOIDS.2017.8246900
Chuanyu Yang, Taku Komura, Zhibin Li
This paper presents a hierarchical framework based on deep reinforcement learning that naturally acquires control policies capable of balancing behaviours, such as ankle push-offs, for humanoid robots, without explicit human design of controllers. Only the reward used to train the neural network is explicitly formulated, based on physical principles and quantities, and is hence explainable. The successful emergence of human-comparable behaviours through deep reinforcement learning demonstrates the feasibility of using an AI-based approach for humanoid motion control in a unified framework. Moreover, the balance strategies learned by reinforcement learning provide a larger range of disturbance rejection than zero-moment-point-based methods, suggesting a research direction of using learning-based control to explore optimal performance.
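A physically motivated balancing reward of the kind described would combine terms built from measurable quantities such as posture, centre-of-mass motion, and actuation effort. The sketch below shows one plausible shape for such a reward; the specific terms and weights are our assumptions, not the paper's formulation.

```python
# Hedged sketch of a physically-motivated balancing reward.
# Terms and weights are assumptions, not the paper's exact reward.
import numpy as np

def balance_reward(pitch, com_vel, torques, w=(1.0, 0.5, 0.01)):
    """Reward uprightness, penalize CoM velocity and actuation effort.

    pitch    -- torso pitch angle [rad]
    com_vel  -- horizontal centre-of-mass velocity [m/s]
    torques  -- joint torque vector [N*m]
    """
    upright = np.exp(-pitch ** 2)               # 1 when perfectly upright
    stillness = np.exp(-com_vel ** 2)           # 1 when the CoM is at rest
    effort = float(np.sum(np.square(torques)))  # actuation cost
    return w[0] * upright + w[1] * stillness - w[2] * effort

print(balance_reward(0.05, 0.1, np.array([2.0, -1.5, 0.5])))
```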
{"title":"Emergence of human-comparable balancing behaviours by deep reinforcement learning","authors":"Chuanyu Yang, Taku Komura, Zhibin Li","doi":"10.1109/HUMANOIDS.2017.8246900","DOIUrl":"https://doi.org/10.1109/HUMANOIDS.2017.8246900","url":null,"abstract":"This paper presents a hierarchical framework based on deep reinforcement learning that naturally acquires control policies that are capable of performing balancing behaviours such as ankle push-offs for humanoid robots, without explicit human design of controllers. Only the reward for training the neural network is specifically formulated based on the physical principles and quantities, and hence explainable. The successful emergence of human-comparable behaviours through the deep reinforcement learning demonstrates the feasibility of using an AI-based approach for humanoid motion control in a unified framework. Moreover, the balance strategies learned by reinforcement learning provides a larger range of disturbance rejection than that of the zero moment point based methods, suggesting a research direction of using learning-based controls to explore the optimal performance.","PeriodicalId":143992,"journal":{"name":"2017 IEEE-RAS 17th International Conference on Humanoid Robotics (Humanoids)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123711286","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A distributed control architecture for collaborative multi-robot task allocation
Pub Date: 2017-11-01 | DOI: 10.1109/HUMANOIDS.2017.8246931
Janelle Blankenburg, S. Banisetty, S. P. H. Alinodehi, Luke Fraser, David Feil-Seifer, M. Nicolescu, M. Nicolescu
This paper addresses the problem of task allocation for multi-robot systems performing tasks with complex, hierarchical representations that contain different types of ordering constraints and multiple paths of execution. We propose a distributed multi-robot control architecture that addresses these challenges and makes the following contributions: i) it allows on-line, dynamic allocation of robots to the various steps of a task; ii) it ensures that the collaborative robot system obeys all task constraints; and iii) it allows opportunistic, flexible task execution under different environmental conditions. The architecture uses a distributed messaging system for inter-robot communication. Each robot uses its own state and the states of its team members to track progress on a given task and to identify which subtasks to perform next, using an activation spreading mechanism. We demonstrate the proposed architecture on a team of two humanoid robots (a PR2 and a Baxter) performing hierarchical tasks.
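One way to picture the selection step is a task graph in which a subtask becomes eligible (receives activation) once its ordering constraints are satisfied, and each robot claims an eligible subtask based on shared state. The sketch below illustrates that rule; the task names and structure are invented for illustration and are not the authors' implementation.

```python
# Minimal sketch of constraint-driven subtask selection over a task
# graph, in the spirit of activation spreading. Tasks are assumed.
tasks = {
    "fetch_cup":  {"preconds": [],             "done": False},
    "pour_water": {"preconds": ["fetch_cup"],  "done": False},
    "serve":      {"preconds": ["pour_water"], "done": False},
}

def eligible(tasks):
    """A subtask becomes active once all of its predecessors are complete."""
    return [name for name, t in tasks.items()
            if not t["done"] and all(tasks[p]["done"] for p in t["preconds"])]

# Each robot would broadcast its state and claim one eligible subtask.
print(eligible(tasks))                 # ['fetch_cup']
tasks["fetch_cup"]["done"] = True      # e.g. the PR2 finishes fetching
print(eligible(tasks))                 # ['pour_water']
```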
{"title":"A distributed control architecture for collaborative multi-robot task allocation","authors":"Janelle Blankenburg, S. Banisetty, S. P. H. Alinodehi, Luke Fraser, David Feil-Seifer, M. Nicolescu, M. Nicolescu","doi":"10.1109/HUMANOIDS.2017.8246931","DOIUrl":"https://doi.org/10.1109/HUMANOIDS.2017.8246931","url":null,"abstract":"This paper addresses the problem of task allocation for multi-robot systems that perform tasks with complex, hierarchical representations which contain different types of ordering constraints and multiple paths of execution. We propose a distributed multi-robot control architecture that addresses the above challenges and makes the following contributions: i) it allows for on-line, dynamic allocation of robots to various steps of the task, ii) it ensures that the collaborative robot system will obey all of the task constraints and iii) it allows for opportunistic, flexible task execution given different environmental conditions. This architecture uses a distributed messaging system to allow the robots to communicate. Each robot uses its own state and team member states to keep track of the progress on a given task and identify which subtasks to perform next using an activation spreading mechanism. We demonstrate the proposed architecture on a team of two humanoid robots (a PR2 and a Baxter) performing hierarchical tasks.","PeriodicalId":143992,"journal":{"name":"2017 IEEE-RAS 17th International Conference on Humanoid Robotics (Humanoids)","volume":"122 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129758121","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Gaussian process based model predictive controller for imitation learning
Pub Date: 2017-11-01 | DOI: 10.1109/HUMANOIDS.2017.8246971
V. Joukov, D. Kulić
Humans still outperform robots in most manipulation and locomotion tasks. Research suggests that humans minimize a task-specific cost function when performing movements. In this paper, we present a Gaussian process based method to learn the underlying cost function, without making assumptions about its structure, and to reproduce the demonstrated movement on a robot using a linear model predictive control framework. We show that the learned cost function can be used to prioritize between tracking and additional cost functions based on exemplar variance, while satisfying task- and joint-space constraints. Tuning the weighting between the learned position and velocity costs produces trajectories of the desired shape even in the presence of constraints. The approach is validated in simulation, with a simple 2-DoF manipulator showing joint- and task-space tracking and a 4-DoF manipulator reproducing trajectories based on a human handwriting dataset.
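The variance-based prioritization can be sketched as: fit a Gaussian process to several demonstrations and weight the MPC tracking cost by the inverse of the predictive variance, so the controller tracks tightly where demonstrations agree and relaxes where they vary. The example below shows this with scikit-learn; the kernel choice and weighting rule are our assumptions, not the paper's exact formulation.

```python
# Hedged sketch: GP fit to demonstrations; predictive variance sets
# tracking weights for an MPC stage cost. Kernel/weights are assumed.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

t = np.linspace(0, 1, 50).reshape(-1, 1)
demos = [np.sin(2 * np.pi * t.ravel()) + np.random.normal(0, s, 50)
         for s in (0.01, 0.05, 0.1)]     # three noisy demonstrations

T = np.vstack([t] * len(demos))
Y = np.concatenate(demos)

gp = GaussianProcessRegressor(RBF(0.1) + WhiteKernel(1e-3)).fit(T, Y)
mean, std = gp.predict(t, return_std=True)

# Trust the GP mean where the demonstrations agree; relax tracking
# where exemplar variance is high.
weights = 1.0 / (std ** 2 + 1e-6)
print(weights.min(), weights.max())
```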
{"title":"Gaussian process based model predictive controller for imitation learning","authors":"V. Joukov, D. Kulić","doi":"10.1109/HUMANOIDS.2017.8246971","DOIUrl":"https://doi.org/10.1109/HUMANOIDS.2017.8246971","url":null,"abstract":"Humans still outperform robots in most manipulation and locomotion tasks. Research suggests that humans minimize a task specific cost function when performing movements. In this paper we present a Gaussian Process based method to learn the underlying cost function, without making assumptions on its structure, and reproduce the demonstrated movement on a robot using a linear model predictive control framework. We show that the learned cost function can be used to prioritize between tracking and additional cost functions based on exemplar variance, and satisfy task and joint space constraints. Tuning the weighting between learned position and velocity costs produces trajectories of the desired shape even in the presence of constraints. The approach is validated in simulation with a simple 2dof manipulator showing joint and task space tracking and with a 4dof manipulator reproducing trajectories based on a human handwriting dataset.","PeriodicalId":143992,"journal":{"name":"2017 IEEE-RAS 17th International Conference on Humanoid Robotics (Humanoids)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116577822","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}