
2019 IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids): Latest Publications

Active Learning of Probabilistic Movement Primitives
Pub Date : 2019-06-29 DOI: 10.1109/Humanoids43949.2019.9035026
Adam Conkey, Tucker Hermans
A Probabilistic Movement Primitive (ProMP) defines a distribution over trajectories with an associated feedback policy. ProMPs are typically initialized from human demonstrations and achieve task generalization through probabilistic operations. However, there is currently no principled guidance in the literature to determine how many demonstrations a teacher should provide and what constitutes a “good” demonstration for promoting generalization. In this paper, we present an active learning approach to learning a library of ProMPs capable of task generalization over a given space. We utilize uncertainty sampling techniques to generate a task instance for which a teacher should provide a demonstration. The provided demonstration is incorporated into an existing ProMP if possible, or a new ProMP is created from the demonstration if it is determined that it is too dissimilar from existing demonstrations. We provide a qualitative comparison between common active learning metrics; motivated by this comparison we present a novel uncertainty sampling approach named “Greatest Mahalanobis Distance.” We perform grasping experiments on a real KUKA robot and show our novel active learning measure achieves better task generalization with fewer demonstrations than a random sampling over the space.
Citations: 14
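The "Greatest Mahalanobis Distance" query rule described in the abstract above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: it assumes each ProMP in the library is summarized by a Gaussian (mean and covariance) over a task-parameter space, and picks the candidate task that lies farthest, in Mahalanobis distance, from every existing ProMP — the task the library is most uncertain about.

```python
import numpy as np

def mahalanobis(x, mean, cov):
    """Mahalanobis distance of point x from a Gaussian N(mean, cov)."""
    d = x - mean
    return float(np.sqrt(d @ np.linalg.solve(cov, d)))

def greatest_mahalanobis_query(candidates, promps):
    """Pick the candidate task farthest (in Mahalanobis distance) from
    every ProMP in the library, i.e. the most uncertain task instance.
    `promps` is a list of (mean, cov) pairs over task parameters."""
    scores = [min(mahalanobis(x, m, c) for m, c in promps)
              for x in candidates]
    return int(np.argmax(scores))
```

The same distance could also serve the dissimilarity test in the abstract: a demonstration whose task parameters exceed some Mahalanobis threshold to all existing ProMPs would seed a new ProMP rather than be merged.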
Control of a High Performance Bipedal Robot using Viscoelastic Liquid Cooled Actuators
Pub Date : 2019-06-10 DOI: 10.1109/Humanoids43949.2019.9035023
Junhyeok Ahn, Donghyun Kim, S. Bang, N. Paine, L. Sentis
This paper describes the control and evaluation of a new human-scale biped robot with viscoelastic liquid cooled actuators (VLCA). Based on the lessons learned from our team's previous work on VLCA, we present a new system design embodying a Reaction Force Sensing Series Elastic Actuator and a Force Sensing Series Elastic Actuator. These designs are aimed at reducing the size and weight of the robot's actuation system while inheriting the advantages of our designs, such as energy efficiency, torque density, impact resistance, and position/force controllability. The robot design takes into consideration human-inspired kinematics and range-of-motion, while relying on foot placement to balance. In terms of actuator control, we perform a stability analysis on a Disturbance Observer designed for force control. We then evaluate various position control algorithms in both the time and frequency domains for our VLCA actuators. With the low-level baseline established, we first perform a controller evaluation on the legs using Operational Space Control. Finally, we move on to evaluating the full bipedal robot by accomplishing unsupported dynamic walking.
Citations: 12
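The Disturbance Observer mentioned in the abstract can be illustrated with a minimal discrete-time sketch. This is a generic first-order DOB under assumed parameters, not the paper's implementation: the disturbance estimate is a low-pass filtered residual between the measured actuator torque and the torque predicted by a nominal inertia model.

```python
import numpy as np

class DisturbanceObserver:
    """First-order discrete disturbance observer (illustrative sketch).
    The nominal model tau = J * accel is an assumption; the estimate
    d_hat tracks the low-pass filtered model residual."""

    def __init__(self, inertia, cutoff_hz, dt):
        self.inertia = inertia  # nominal actuator inertia (assumed known)
        wc_dt = dt * 2.0 * np.pi * cutoff_hz
        self.alpha = wc_dt / (1.0 + wc_dt)  # discrete low-pass gain
        self.d_hat = 0.0

    def update(self, tau_measured, accel_measured):
        tau_nominal = self.inertia * accel_measured
        residual = tau_measured - tau_nominal  # unmodeled torque
        self.d_hat += self.alpha * (residual - self.d_hat)
        return self.d_hat
```

The cutoff frequency trades disturbance-rejection bandwidth against sensitivity to sensor noise, which is why a stability analysis like the one in the paper is needed before closing the force loop around such an estimate.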
Learning Task Constraints from Demonstration for Hybrid Force/Position Control
Pub Date : 2018-11-07 DOI: 10.1109/Humanoids43949.2019.9035013
Adam Conkey, Tucker Hermans
We present a novel method for learning hybrid force/position control from demonstration. We learn a dynamic constraint frame aligned to the direction of desired force using Cartesian Dynamic Movement Primitives. In contrast to approaches that utilize a fixed constraint frame, our approach easily accommodates tasks with rapidly changing task constraints over time. We activate only one degree of freedom for force control at any given time, ensuring motion is always possible orthogonal to the direction of desired force. Since we utilize demonstrated forces to learn the constraint frame, we are able to compensate for forces not detected by methods that learn only from demonstrated kinematic motion, such as frictional forces between the end-effector and contact surface. We additionally propose novel extensions to the Dynamic Movement Primitive framework that encourage robust transition from free-space motion to in-contact motion in spite of environment uncertainty. We incorporate force feedback and a dynamically shifting goal to reduce forces applied to the environment and retain stable contact while enabling force control. Our methods exhibit low impact forces on contact and low steady-state tracking error.
Citations: 8
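The abstract's core idea — force control on a single axis of a learned constraint frame, position control on the orthogonal axes — can be sketched with a standard selection-matrix decomposition. This is a textbook-style illustration under assumed conventions (force-controlled z-axis, simple proportional gains), not the paper's controller:

```python
import numpy as np

def hybrid_command(R, f_des, x_err, kp_pos, kf):
    """Compose a hybrid force/position command in a constraint frame.
    R: rotation of the constraint frame (its z-axis aligned with the
    desired force direction); f_des: desired force magnitude; x_err:
    position error in the world frame. Gains kp_pos, kf are assumptions."""
    S = np.diag([0.0, 0.0, 1.0])        # selection: force-controlled DOF
    I = np.eye(3)
    err_c = R.T @ x_err                 # position error in constraint frame
    u_pos = (I - S) @ (kp_pos * err_c)  # position law on the free axes
    u_force = S @ np.array([0.0, 0.0, kf * f_des])
    return R @ (u_pos + u_force)        # command back in the world frame
```

Because only one degree of freedom is selected for force, motion orthogonal to the desired force always remains possible, matching the constraint-activation choice described in the abstract; the paper's contribution is learning R as a time-varying frame from demonstrated forces rather than fixing it.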
Deep Network Uncertainty Maps for Indoor Navigation
Pub Date : 2018-09-13 DOI: 10.1109/Humanoids43949.2019.9035016
Jens Lundell, Francesco Verdoja, V. Kyrki
Most mobile robots for indoor use rely on 2D laser scanners for localization, mapping and navigation. These sensors, however, cannot detect transparent surfaces or measure the full occupancy of complex objects such as tables. Deep Neural Networks have recently been proposed to overcome this limitation by learning to estimate object occupancy. These estimates are nevertheless subject to uncertainty, making the evaluation of their confidence an important issue for these measures to be useful for autonomous navigation and mapping. In this work we approach the problem from two sides. First we discuss uncertainty estimation in deep models, proposing a solution based on a fully convolutional neural network. The proposed architecture is not restricted by the assumption that the uncertainty follows a Gaussian model, as in the case of many popular solutions for deep model uncertainty estimation, such as Monte-Carlo Dropout. We present results showing that uncertainty over obstacle distances is actually better modeled with a Laplace distribution. Then, we propose a novel approach to build maps based on Deep Neural Network uncertainty models. In particular, we present an algorithm to build a map that includes information over obstacle distance estimates while taking into account the level of uncertainty in each estimate. We show how the constructed map can be used to increase global navigation safety by planning trajectories which avoid areas of high uncertainty, enabling higher autonomy for mobile robots in indoor settings.
Citations: 12
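The Laplace uncertainty model the abstract argues for can be trained with the corresponding negative log-likelihood loss. A minimal sketch, assuming the network emits both a distance estimate `mu` and a positive scale `b` per pixel (the exact head design is the paper's, not shown here):

```python
import numpy as np

def laplace_nll(y_true, mu, b):
    """Mean negative log-likelihood of targets under Laplace(mu, b).
    Compared with the Gaussian NLL, the absolute-error term penalizes
    outliers less harshly, matching the heavier-tailed distribution the
    paper found to fit obstacle-distance errors better."""
    b = np.maximum(b, 1e-6)  # keep the predicted scale strictly positive
    return float(np.mean(np.log(2.0 * b) + np.abs(y_true - mu) / b))
```

At test time, the predicted scale `b` directly quantifies per-estimate uncertainty, which is what the map-building step can threshold to plan trajectories around high-uncertainty regions.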