Pub Date: 2019-06-29 | DOI: 10.1109/Humanoids43949.2019.9035026
Adam Conkey, Tucker Hermans
A Probabilistic Movement Primitive (ProMP) defines a distribution over trajectories with an associated feedback policy. ProMPs are typically initialized from human demonstrations and achieve task generalization through probabilistic operations. However, there is currently no principled guidance in the literature on how many demonstrations a teacher should provide and what constitutes a "good" demonstration for promoting generalization. In this paper, we present an active learning approach to learning a library of ProMPs capable of task generalization over a given space. We utilize uncertainty sampling techniques to generate a task instance for which a teacher should provide a demonstration. The provided demonstration is incorporated into an existing ProMP if possible; otherwise, if it is too dissimilar from existing demonstrations, a new ProMP is created from it. We provide a qualitative comparison of common active learning metrics; motivated by this comparison, we present a novel uncertainty sampling approach named "Greatest Mahalanobis Distance." We perform grasping experiments on a real KUKA robot and show that our novel active learning measure achieves better task generalization with fewer demonstrations than random sampling over the space.
Title: "Active Learning of Probabilistic Movement Primitives." 2019 IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids).
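The "Greatest Mahalanobis Distance" criterion described above can be sketched as follows, under the assumption that the ProMP library is represented as a list of Gaussian (mean, covariance) pairs over the task parameter space; the function names and this exact formulation are illustrative, not the authors' API:

```python
import numpy as np

def mahalanobis(x, mean, cov):
    """Mahalanobis distance of task instance x from a Gaussian (mean, cov)."""
    diff = x - mean
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

def greatest_mahalanobis_query(candidates, promp_library):
    """Select the candidate task instance whose *nearest* ProMP (by
    Mahalanobis distance) is farthest away, i.e. the point of the task
    space least covered by the current library."""
    def nearest_dist(x):
        return min(mahalanobis(x, m, c) for m, c in promp_library)
    return max(candidates, key=nearest_dist)
```

A teacher would then be asked to demonstrate the returned task instance, after which the library is updated and the query repeated.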
Pub Date: 2019-06-10 | DOI: 10.1109/Humanoids43949.2019.9035023
Junhyeok Ahn, Donghyun Kim, S. Bang, N. Paine, L. Sentis
This paper describes the control and evaluation of a new human-scale biped robot with viscoelastic liquid-cooled actuators (VLCA). Based on the lessons learned from our team's previous work on VLCAs, we present a new system design embodying a Reaction Force Sensing Series Elastic Actuator and a Force Sensing Series Elastic Actuator. These designs aim to reduce the size and weight of the robot's actuation system while inheriting the advantages of our previous designs, such as energy efficiency, torque density, impact resistance, and position/force controllability. The robot design takes into consideration human-inspired kinematics and range of motion, while relying on foot placement to balance. For actuator control, we perform a stability analysis of a Disturbance Observer designed for force control. We then evaluate various position control algorithms in both the time and frequency domains for our VLCA actuators. With the low-level baseline established, we first perform a controller evaluation on the legs using Operational Space Control. Finally, we evaluate the full bipedal robot by accomplishing unsupported dynamic walking.
Title: "Control of a High Performance Bipedal Robot using Viscoelastic Liquid Cooled Actuators." 2019 IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids).
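The Disturbance Observer mentioned above estimates unmodeled torques so the force controller can cancel them. A minimal first-order sketch (not the paper's stability-analyzed design) compares the torque implied by the nominal inertia model against the commanded torque and low-pass-filters the gap; the inertia model, filter gain, and function name are all simplifying assumptions:

```python
def dob_step(tau_cmd, accel_meas, d_hat, inertia_nom, alpha):
    """One step of a simple disturbance observer: the disturbance
    estimate d_hat tracks the low-pass-filtered gap between the torque
    implied by measured acceleration and the commanded torque."""
    tau_meas = inertia_nom * accel_meas      # torque per the nominal model
    d_raw = tau_meas - tau_cmd               # unmodeled torque this step
    return alpha * d_hat + (1 - alpha) * d_raw   # first-order low-pass
```

In a force-control loop, the returned estimate would be subtracted from the next torque command; the filter constant `alpha` trades estimation lag against noise sensitivity.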
Pub Date: 2018-11-07 | DOI: 10.1109/Humanoids43949.2019.9035013
Adam Conkey, Tucker Hermans
We present a novel method for learning hybrid force/position control from demonstration. We learn a dynamic constraint frame aligned to the direction of desired force using Cartesian Dynamic Movement Primitives. In contrast to approaches that utilize a fixed constraint frame, our approach easily accommodates tasks with rapidly changing task constraints over time. We activate only one degree of freedom for force control at any given time, ensuring motion is always possible orthogonal to the direction of desired force. Since we utilize demonstrated forces to learn the constraint frame, we are able to compensate for forces not detected by methods that learn only from demonstrated kinematic motion, such as frictional forces between the end-effector and contact surface. We additionally propose novel extensions to the Dynamic Movement Primitive framework that encourage robust transition from free-space motion to in-contact motion in spite of environment uncertainty. We incorporate force feedback and a dynamically shifting goal to reduce forces applied to the environment and retain stable contact while enabling force control. Our methods exhibit low impact forces on contact and low steady-state tracking error.
Title: "Learning Task Constraints from Demonstration for Hybrid Force/Position Control." 2019 IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids).
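The single-DOF force activation described above can be sketched with a classic hybrid force/position selection matrix, assuming the learned constraint frame is given as a rotation `R_cf` whose z-axis points along the desired force; the gains and function name are illustrative, and the paper's DMP-based frame learning is not reproduced here:

```python
import numpy as np

def hybrid_command(R_cf, f_des, f_meas, x_err, kf, kp):
    """Hybrid force/position command in a constraint frame R_cf whose
    z-axis is aligned with the desired force direction. Exactly one DOF
    (z) is force-controlled; the orthogonal x/y DOFs track position,
    so motion is always possible orthogonal to the applied force."""
    S = np.diag([0.0, 0.0, 1.0])            # selection: force along z only
    f_err_cf = R_cf.T @ (f_des - f_meas)    # force error in constraint frame
    x_err_cf = R_cf.T @ x_err               # position error in constraint frame
    u_cf = S @ (kf * f_err_cf) + (np.eye(3) - S) @ (kp * x_err_cf)
    return R_cf @ u_cf                      # map command back to world frame
```

Because `R_cf` is re-estimated over time from the demonstrated forces, the force-controlled axis can rotate as task constraints change, which is the contrast with fixed-constraint-frame approaches.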
Pub Date: 2018-09-13 | DOI: 10.1109/Humanoids43949.2019.9035016
Jens Lundell, Francesco Verdoja, V. Kyrki
Most mobile robots for indoor use rely on 2D laser scanners for localization, mapping and navigation. These sensors, however, cannot detect transparent surfaces or measure the full occupancy of complex objects such as tables. Deep Neural Networks have recently been proposed to overcome this limitation by learning to estimate object occupancy. These estimates are nevertheless subject to uncertainty, making the evaluation of their confidence an important issue for these measures to be useful for autonomous navigation and mapping. In this work we approach the problem from two sides. First we discuss uncertainty estimation in deep models, proposing a solution based on a fully convolutional neural network. The proposed architecture is not restricted by the assumption that the uncertainty follows a Gaussian model, as in the case of many popular solutions for deep model uncertainty estimation, such as Monte-Carlo Dropout. We present results showing that uncertainty over obstacle distances is actually better modeled with a Laplace distribution. Then, we propose a novel approach to build maps based on Deep Neural Network uncertainty models. In particular, we present an algorithm to build a map that includes information over obstacle distance estimates while taking into account the level of uncertainty in each estimate. We show how the constructed map can be used to increase global navigation safety by planning trajectories which avoid areas of high uncertainty, enabling higher autonomy for mobile robots in indoor settings.
Title: "Deep Network Uncertainty Maps for Indoor Navigation." 2019 IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids).
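The Laplace-distributed uncertainty model above implies training the network with a Laplace negative log-likelihood rather than the Gaussian one; a minimal NumPy sketch of that loss follows (the paper uses a fully convolutional network, which is not reproduced here, and the function name is illustrative):

```python
import numpy as np

def laplace_nll(y, mu, b):
    """Mean negative log-likelihood of distance targets y under a
    Laplace(mu, b) model, where the network predicts both the obstacle
    distance mu and the scale (uncertainty) b for each map cell."""
    b = np.maximum(b, 1e-6)   # keep the scale strictly positive
    return np.mean(np.abs(y - mu) / b + np.log(2.0 * b))
```

A convenient property is that, for fixed `mu`, this loss is minimized when `b` equals the mean absolute error, so the predicted scale directly reads as an expected distance error for the uncertainty-aware map.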