
Latest publications from the 2020 IEEE-RAS 20th International Conference on Humanoid Robots (Humanoids)

Online Virtual Repellent Point Adaptation for Biped Walking using Iterative Learning Control
Pub Date: 2021-07-19 DOI: 10.1109/HUMANOIDS47582.2021.9555676
Shengzhi Wang, George Mesesan, Johannes Englsberger, Dongheui Lee, C. Ott
We propose an online learning framework to reduce the effect of model inaccuracies and improve the robustness of the Divergent Component of Motion (DCM)-based walking algorithm. This framework uses iterative learning control (ILC) theory to learn an adjusted Virtual Repellent Point (VRP) reference trajectory based on the current VRP error. The learned VRP reference waypoints are saved in a memory buffer and used in the subsequent walking iteration. Based on the availability of force-torque (FT) sensors, we propose two implementations that use different VRP error signals for learning: a measurement-error-based and a commanded-error-based framework. Both implementations reduce the average VRP errors and demonstrate improved walking robustness. The measurement-error-based framework has better reference trajectory tracking performance for the measured VRP.
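As a rough illustration of the ILC idea above, the sketch below applies a P-type update to a stored VRP reference after each walking iteration. The gain, buffer layout, and the toy constant-bias "plant" are assumptions for illustration only, not the paper's implementation.

```python
import numpy as np

def ilc_update(vrp_ref, vrp_error, gain=0.5):
    """P-type ILC update: shift the stored VRP reference waypoints by a
    fraction of the VRP error observed in the last walking iteration."""
    return vrp_ref + gain * vrp_error

# Toy example: an unmodeled constant 2 cm bias in x is learned away by
# replaying and correcting the reference over repeated iterations.
nominal = np.zeros((5, 3))             # desired VRP waypoints (x, y, z)
ref = nominal.copy()                   # memory buffer of learned waypoints
bias = np.array([0.02, 0.0, 0.0])      # unmodeled offset of the "plant"
for _ in range(30):
    measured = ref - bias              # VRP the robot actually realizes
    error = nominal - measured         # VRP tracking error
    ref = ilc_update(ref, error)
residual = float(np.abs(nominal - (ref - bias)).max())
```

With a gain of 0.5 the residual shrinks geometrically, so after a few dozen iterations the learned reference has absorbed the bias almost entirely.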
Citations: 3
A HZD-based Framework for the Real-time, Optimization-free Enforcement of Gait Feasibility Constraints
Pub Date: 2021-07-19 DOI: 10.1109/HUMANOIDS47582.2021.9555786
Pravin Dangol, Andrew Lessieur, Eric N. Sihite, A. Ramezani
Real-time constraint satisfaction for robots can be quite challenging due to the high computational complexity of accounting for system dynamics and environmental interactions, which often forces modelling simplifications that do not account for all performance criteria. We instead propose an optimization-free approach in which reference trajectories are manipulated to satisfy constraints arising from ground contact as well as those prescribed for states and inputs. Unintended changes to trajectories, especially ones optimized to produce periodic gaits, can adversely affect gait stability; however, we show that our approach can still guarantee gait stability by employing the coaxial thrusters that are unique to our robot.
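A minimal way to picture "manipulating reference trajectories" without an online optimizer is pointwise saturation of the reference into the admissible set. This toy sketch handles only box limits on a scalar signal; the contact constraints and thruster-assisted stabilization of the paper's HZD framework are well beyond it.

```python
def enforce_limits(reference, lower, upper):
    """Pointwise saturation of a reference signal into a box constraint,
    instead of re-solving an optimization problem online."""
    return [min(max(r, lower), upper) for r in reference]

ref = [-1.2, -0.4, 0.0, 0.7, 1.5]
safe_ref = enforce_limits(ref, -1.0, 1.0)   # [-1.0, -0.4, 0.0, 0.7, 1.0]
```

The appeal of such a rule is that its cost is constant per sample, which is what makes real-time enforcement feasible.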
Citations: 5
Walking-in-Place Foot Interface for Locomotion Control and Telepresence of Humanoid Robots
Pub Date: 2021-07-19 DOI: 10.1109/HUMANOIDS47582.2021.9555768
Ata Otaran, I. Farkhatdinov
We present a foot-tapping, walking-in-place locomotion interface and an algorithm to generate high-level movement patterns for remotely controlled humanoid robots. Foot-tapping motions on the platform are used as movement commands to remotely control the locomotion of a humanoid robot. We describe two separate motion mapping algorithms, suitable for wheeled and bipedal humanoid locomotion respectively. Our interface enables remote locomotion control of humanoid robots from a seated, hands-free position, leaving both handheld and desktop-based interfaces available for manipulation tasks. An experimental study with eight participants controlling the walking speed of a virtual robot was conducted to explore whether the participants could maintain a distance (1-3 m range) to a reference target (leading robot) moving at different speeds. All participants were able to use the proposed interface to track the leading robot efficiently at walking speeds below 1 m/s, and the average tracking error was 0.47 m. We discuss the results of the study along with the NASA TLX and system usability surveys.
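One plausible way to turn foot taps into a high-level velocity command is to map tap rate to walking speed with saturation at the interface's limit. The function name, gain, and mapping below are hypothetical; the abstract does not specify the paper's exact algorithm.

```python
def speed_from_taps(tap_times, gain=0.3, max_speed=1.0):
    """Map recent foot-tap timestamps (in seconds) to a forward-velocity
    command: faster tapping commands faster walking, saturated at the
    interface's speed limit."""
    if len(tap_times) < 2:
        return 0.0
    intervals = [t2 - t1 for t1, t2 in zip(tap_times, tap_times[1:])]
    tap_rate = len(intervals) / sum(intervals)   # taps per second
    return min(gain * tap_rate, max_speed)

v = speed_from_taps([0.0, 0.5, 1.0, 1.5])        # 2 Hz tapping -> 0.6 m/s
```

Averaging over several intervals smooths out jitter in individual taps, at the cost of a small command latency.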
Citations: 3
Safe Data-Driven Contact-Rich Manipulation
Pub Date: 2021-07-19 DOI: 10.1109/HUMANOIDS47582.2021.9555680
Ioanna Mitsioni, Pouria Tajvar, D. Kragic, Jana Tumova, Christian Pek
In this paper, we address the safety of data-driven control for contact-rich manipulation. We propose to restrict the controller's action space to keep the system in a set of safe states. In the absence of an analytical model, we show how Gaussian Processes (GPs) can be used to approximate safe sets. Using the GP, we disable inputs for which the predicted states are likely to be unsafe. Furthermore, we show how locally designed feedback controllers can be used to improve the execution precision in the presence of modelling errors. We demonstrate the benefits of our method on a pushing task with a variety of dynamics, using known and unknown surfaces and different object loads. Our results illustrate that the proposed approach significantly improves the performance and safety of the baseline controller.
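The core mechanism, disabling inputs whose GP-predicted next state is likely unsafe, can be sketched with a minimal GP regressor. The kernel, thresholds, and toy dynamics below are illustrative assumptions, not the paper's model or parameters.

```python
import numpy as np

def rbf(A, B, ls=0.5):
    """Squared-exponential kernel between row-wise point sets."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

class GPSafetyFilter:
    """GP regression of the next state from (state, action) pairs; an
    action is allowed only if the predicted next state stays within the
    safe set with a confidence margin."""
    def __init__(self, X, y, noise=1e-3):
        self.X = X
        self.Kinv = np.linalg.inv(rbf(X, X) + noise * np.eye(len(X)))
        self.alpha = self.Kinv @ y

    def predict(self, Xs):
        Ks = rbf(Xs, self.X)
        mean = Ks @ self.alpha
        var = np.maximum(1.0 - np.sum((Ks @ self.Kinv) * Ks, axis=1), 0.0)
        return mean, var

    def safe_actions(self, Xs, limit=1.0, beta=2.0):
        mean, var = self.predict(Xs)
        return np.abs(mean) + beta * np.sqrt(var) <= limit

# Toy dynamics: next_state = state + action, safe set |state| <= 1.
s, a = np.meshgrid(np.linspace(-1, 1, 9), np.linspace(-1, 1, 9))
X = np.stack([s.ravel(), a.ravel()], axis=1)
gp = GPSafetyFilter(X, X[:, 0] + X[:, 1])
candidates = np.array([[0.5, -0.5], [0.5, 0.0], [0.5, 0.8]])
mask = gp.safe_actions(candidates)   # last action drives the state past the limit
```

The `beta * sqrt(var)` term makes the filter conservative where the GP is uncertain, which is precisely what keeps the data-driven controller inside the approximated safe set.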
Citations: 4
The Elliott and Connolly Benchmark: A Test for Evaluating the In-Hand Dexterity of Robot Hands
Pub Date: 2021-07-19 DOI: 10.1109/HUMANOIDS47582.2021.9555798
Ryan Coulson, C. Li, C. Majidi, N. Pollard
Achieving dexterous in-hand manipulation with robot hands is an extremely challenging problem, in part due to current limitations in hardware design. One notable bottleneck hampering the development of improved hardware for dexterous manipulation is the lack of a standardized benchmark for evaluating in-hand dexterity. To address this issue, we establish a new benchmark for evaluating in-hand dexterity, specifically for humanoid-type robot hands: the Elliott and Connolly Benchmark. This benchmark is based on a classification of human manipulations established by Elliott and Connolly, and consists of 13 distinct in-hand manipulation patterns. We define qualitative and quantitative metrics for evaluation of the benchmark, and provide a detailed testing protocol. Additionally, we introduce a dexterous robot hand, the CMU Foam Hand III, which is evaluated using the benchmark, successfully completing 10 of the 13 manipulation patterns and outperforming human-hand baseline results on several of the patterns.
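The quantitative side of such a benchmark reduces to aggregating per-pattern trial outcomes. The pattern names and scoring below are purely illustrative placeholders, not the benchmark's official 13 patterns or protocol.

```python
# Hypothetical result log: manipulation pattern -> per-trial successes.
results = {
    "pattern-01": [True, True, False],
    "pattern-02": [True, True, True],
    "pattern-03": [False, False, True],
}

def benchmark_summary(results):
    """Per-pattern success rate plus the number of patterns completed at
    least once, as simple quantitative metrics for a benchmark run."""
    rates = {name: sum(trials) / len(trials)
             for name, trials in results.items()}
    completed = sum(1 for trials in results.values() if any(trials))
    return rates, completed

rates, completed = benchmark_summary(results)
```

Reporting both the completion count and the per-pattern rate mirrors the abstract's split between qualitative (completed or not) and quantitative evaluation.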
Citations: 3
Guided Robot Skill Learning: A User-Study on Learning Probabilistic Movement Primitives with Non-Experts
Pub Date: 2021-07-19 DOI: 10.1109/HUMANOIDS47582.2021.9555785
Moritz Knaust, Dorothea Koert
Intelligent robots can potentially assist humans in everyday life and industrial production processes. However, the variety of tasks for such robots renders pure preprogramming infeasible, and learning new tasks directly from non-expert users becomes desirable. Here, imitation learning and the concept of movement primitives are promising and widely used approaches. In particular, Probabilistic Movement Primitives (ProMPs) provide a representation that can capture and exploit the variance in human demonstrations. While ProMPs have already been applied to different robotic tasks, an evaluation of how non-expert users can actually teach full tasks based on ProMPs is missing from the literature. We present a framework for Guided Robot Skill Learning which enables inexperienced users to teach a robot combinations of ProMPs and basic robot motions such as gripper commands or Point-to-Point movements. The proposed approach represents the learned skills in the form of sequential Behavior Trees, which can be easily incorporated into more complex robotic behaviors. In a pilot user study with 10 participants, we investigate, on two robotic tasks, how inexperienced users train ProMP-based skills and how they use the concept of modular skill creation. The experimental results show that ProMPs enable more successful task execution than teaching Point-to-Point motions. Additionally, our evaluation reveals specific problems relevant to future ProMP-based teaching systems for non-expert users, such as multimodality and missing variance in the demonstrations.
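To make the ProMP idea concrete, the sketch below fits a distribution over basis-function weights from a handful of demonstrations, so that the mean trajectory and the demonstration variance can both be recovered. Basis count, widths, and the toy sine demos are assumptions, not the study's setup.

```python
import numpy as np

def rbf_features(t, n_basis=10, width=0.02):
    """Normalized Gaussian basis functions over the phase t in [0, 1]."""
    centers = np.linspace(0.0, 1.0, n_basis)
    phi = np.exp(-0.5 * (t[:, None] - centers[None, :]) ** 2 / width)
    return phi / phi.sum(axis=1, keepdims=True)

def learn_promp(demos, n_basis=10, reg=1e-6):
    """Project each demo onto basis weights (ridge regression), then fit
    a Gaussian over the weights to capture demonstration variance."""
    T = demos.shape[1]
    Phi = rbf_features(np.linspace(0.0, 1.0, T), n_basis)
    A = np.linalg.inv(Phi.T @ Phi + reg * np.eye(n_basis)) @ Phi.T
    W = demos @ A.T                                  # (n_demos, n_basis)
    return W.mean(axis=0), np.cov(W.T), Phi

# Toy 1-DoF demos: sine reaches with varying amplitude.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)
demos = np.stack([(1.0 + 0.1 * rng.standard_normal()) * np.sin(np.pi * t)
                  for _ in range(8)])
w_mean, w_cov, Phi = learn_promp(demos)
mean_traj = Phi @ w_mean                             # mean trajectory
var_traj = np.maximum(np.einsum('ti,ij,tj->t', Phi, w_cov, Phi), 0.0)
std_traj = np.sqrt(var_traj)                         # demo variance per step
```

The weight covariance is what distinguishes a ProMP from a plain averaged trajectory: it tells the controller where the demonstrations agree (be precise) and where they vary (be compliant).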
Citations: 4
Online Learning of Danger Avoidance for Complex Structures of Musculoskeletal Humanoids and Its Applications
Pub Date: 2021-07-19 DOI: 10.1109/HUMANOIDS47582.2021.9555792
Kento Kawaharazuka, Naoki Hiraoka, Yuya Koga, Manabu Nishiura, Yusuke Omura, Yuki Asano, K. Okada, Koji Kawasaki, M. Inaba
The complex structure of musculoskeletal humanoids makes them difficult to model, and inter-body interference and high internal muscle forces are unavoidable. Although various safety mechanisms have been developed to address this problem, it is important not only to deal with dangers when they occur but also to prevent them from happening. In this study, we propose a method to learn, online, a network that outputs a danger probability corresponding to the muscle lengths, so that the robot can gradually prevent dangers from occurring. Applications of this network for control are also described. The method is applied to the musculoskeletal humanoid Musashi, and its effectiveness is verified.
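The learning pattern described, incrementally fitting a danger-probability predictor from muscle lengths as experience accumulates, can be sketched with an online logistic model. This stands in for the paper's network; the feature, labels, and learning rate are toy assumptions.

```python
import math

class OnlineDangerModel:
    """Online logistic model mapping muscle-length features to a danger
    probability, updated by SGD after every new observation."""
    def __init__(self, n_features, lr=0.5):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def predict(self, x):
        z = sum(wi * xi for wi, xi in zip(self.w, x)) + self.b
        return 1.0 / (1.0 + math.exp(-z))

    def update(self, x, dangerous):
        g = self.predict(x) - (1.0 if dangerous else 0.0)  # log-loss gradient
        self.w = [wi - self.lr * g * xi for wi, xi in zip(self.w, x)]
        self.b -= self.lr * g

# Toy data: danger whenever the normalized muscle length exceeds ~0.8.
model = OnlineDangerModel(1)
for _ in range(500):
    for length in [0.2, 0.5, 0.7, 0.85, 0.95]:
        model.update([length], dangerous=length > 0.8)
```

Because the model is updated after every observation, its danger estimate sharpens gradually, matching the abstract's goal of preventing dangers rather than only reacting to them.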
Citations: 0
Android Printing: Towards On-Demand Android Development Employing Multi-Material 3-D Printer
Pub Date: 2021-07-19 DOI: 10.36227/techrxiv.15034623
S. Yagi, Yoshihiro Nakata, H. Ishiguro
In this paper, we propose the concept of Android Printing: printing a full android, including skin and mechanical components, in a single run using a multi-material 3D printer. Printing an android all at once both reduces assembly time and enables intricate designs with many degrees of freedom. To prove this concept, we tested it by actually printing an android. First, we printed skin with multiple annular ridges to test skin deformation. By pulling the skin, we show that its deformation state can be adjusted depending on the ridge structure. This result is essential for designing humanlike skin deformations. We then designed and fabricated a 3D-printed android head with 31 degrees of freedom. The skin and linkage mechanism were printed together before being connected to a unit combining several electric motors. To confirm the concept's feasibility, we created several motions with the android based on human facial movement data. In the future, android printing might enable people to use an android as their own avatar.
Citations: 1
Semantic Scene Manipulation Based on 3D Spatial Object Relations and Language Instructions
Pub Date: 2021-07-19 DOI: 10.1109/HUMANOIDS47582.2021.9555802
Rainer Kartmann, Danqing Liu, T. Asfour
Robot understanding of spatial object relations is key for symbiotic human-robot interaction. Understanding the meaning of such relations between objects in a current scene, and of target relations specified in natural language commands, is essential for generating robot manipulation action goals that change the scene by relocating objects relative to each other so as to fulfill the desired spatial relations. This ability requires a representation of spatial relations that maps spatial relation symbols extracted from language instructions to subsymbolic object goal locations in the world. We present a generative model of static and dynamic 3D spatial relations between multiple reference objects. The model is based on a parametric probability distribution defined in cylindrical coordinates and is learned from examples provided by humans manipulating a scene in the real world. We demonstrate the ability of our representation to generate suitable object goal positions for a pick-and-place task on a humanoid robot, where object relations specified in natural language commands are extracted, and object goal positions are determined and used for parametrizing the actions needed to transfer a given scene into a new one that fulfills the specified relations.
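A generative model over cylindrical coordinates can be pictured as sampling a radius and azimuth around the reference object and converting back to Cartesian goals. The parameter values and the "right of" relation below are illustrative assumptions, not learned parameters from the paper.

```python
import math
import random

def sample_goal(reference_xy, relation_params, rng=random):
    """Sample an (x, y) object goal around a reference object from
    Gaussians over cylindrical coordinates (radius, azimuth)."""
    mu_r, sd_r, mu_phi, sd_phi = relation_params
    r = max(rng.gauss(mu_r, sd_r), 0.0)   # radius stays non-negative
    phi = rng.gauss(mu_phi, sd_phi)
    return (reference_xy[0] + r * math.cos(phi),
            reference_xy[1] + r * math.sin(phi))

# "Right of" the reference object: ~15 cm away, azimuth centered at 0 rad.
random.seed(1)
right_of = (0.15, 0.02, 0.0, 0.2)
goals = [sample_goal((0.0, 0.0), right_of) for _ in range(100)]
```

Working in cylindrical coordinates makes the learned variance interpretable: radial spread captures "how far", angular spread captures "how strictly to that side".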
Citations: 10
Multisensorial robot calibration framework and toolbox
Pub Date: 2021-07-19 DOI: 10.1109/HUMANOIDS47582.2021.9555803
Jakub Rozlivek, Lukas Rustler, K. Štěpánová, M. Hoffmann
The accuracy of robot models critically impacts their performance. With the advent of collaborative, social, or soft robots, the stiffness of the materials and the precision of the manufactured parts drop, and CAD models provide a less accurate basis for modelling. On the other hand, these machines often come with a rich set of powerful yet inexpensive sensors, which opens up the possibility of self-contained calibration approaches that can be performed autonomously and repeatedly by the robot. In this work, we extend the theory of robot kinematic calibration by incorporating new sensory modalities (e.g., cameras on the robot, whole-body tactile sensors), calibration types, and their combinations. We provide a unified formulation that makes it possible to combine traditional approaches (external laser tracker, constraints from contact with the external environment) with the self-contained calibration available to humanoid robots (self-observation, self-contact) in a single framework with a single cost function. Second, we present an open-source toolbox for Matlab that provides this functionality, along with additional tools for preprocessing (e.g., dataset visualization) and evaluation (e.g., observability/identifiability). We illustrate some of the possibilities of this tool through the calibration of two humanoid robots (iCub, Nao) and one industrial manipulator (a dual-arm setup with the Yaskawa-Motoman MA1400).
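The "single cost function" idea can be pictured as stacking weighted residuals from several measurement chains over one shared parameter vector. The one-parameter toy arm, chain names, and grid search below are illustrative assumptions; the toolbox itself targets full kinematic chains in Matlab.

```python
import math

def combined_cost(offset, chains):
    """Single cost function over one shared parameter (a joint offset):
    every calibration chain contributes weighted squared residuals of
    predicted vs. observed quantities."""
    total = 0.0
    for weight, predict, observations in chains:
        for q, observed in observations:
            r = predict(offset, q) - observed
            total += weight * r * r
    return total

# One-parameter toy arm observed through two measurement chains.
true_offset = 0.05
def cam(offset, q):      # "self-observation" model (e.g., on-board camera)
    return math.cos(q + offset)
def touch(offset, q):    # "self-contact" model (e.g., tactile constraint)
    return math.sin(q + offset)

qs = [0.0, 0.4, 0.8, 1.2]
chains = [
    (1.0, cam,   [(q, math.cos(q + true_offset)) for q in qs]),
    (0.5, touch, [(q, math.sin(q + true_offset)) for q in qs]),
]
# Crude grid search stands in for the toolbox's actual optimizer.
best = min((combined_cost(o / 1000.0, chains), o / 1000.0)
           for o in range(-100, 101))[1]
```

Because all chains share the same parameter, modalities that are individually weakly observable can still jointly pin down the offset, which is the point of the unified formulation.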
Citations: 5
Journal
2020 IEEE-RAS 20th International Conference on Humanoid Robots (Humanoids)