Online Virtual Repellent Point Adaptation for Biped Walking using Iterative Learning Control
Pub Date: 2021-07-19, DOI: 10.1109/HUMANOIDS47582.2021.9555676
Shengzhi Wang, George Mesesan, Johannes Englsberger, Dongheui Lee, C. Ott
We propose an online learning framework to reduce the effect of model inaccuracies and improve the robustness of the Divergent Component of Motion (DCM)-based walking algorithm. This framework uses iterative learning control (ILC) theory to learn an adjusted Virtual Repellent Point (VRP) reference trajectory based on the current VRP error. The learned VRP reference waypoints are saved in a memory buffer and used in the subsequent walking iteration. Depending on the availability of force-torque (FT) sensors, we propose two different implementations using different VRP error signals for learning: a measurement-error-based and a commanded-error-based framework. Both implementations reduce the average VRP errors and demonstrate improved walking robustness. The measurement-error-based framework has better reference trajectory tracking performance for the measured VRP.
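The abstract does not spell out the update law itself; purely as an illustration of how an ILC-style VRP adjustment with a memory buffer could look, the following Python sketch shifts the buffered reference waypoints against the VRP error observed in the previous walking iteration. The learning gain, buffer size, and all names are assumptions, not the authors' implementation.

```python
import numpy as np

def ilc_update_vrp(vrp_ref, vrp_error, learning_gain=0.3):
    """One iterative-learning-control pass: move each buffered VRP reference
    waypoint against the error observed at that waypoint in the last walking
    iteration. vrp_ref and vrp_error are (N, 3) arrays of waypoints/errors."""
    return vrp_ref - learning_gain * vrp_error

# Memory buffer holding the adjusted reference between walking iterations.
vrp_buffer = np.zeros((50, 3))                          # 50 waypoints, x/y/z
measured_error = np.random.normal(0, 0.01, (50, 3))     # stand-in for measured minus desired VRP
vrp_buffer = ilc_update_vrp(vrp_buffer, measured_error)  # used in the next iteration
```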
{"title":"Online Virtual Repellent Point Adaptation for Biped Walking using Iterative Learning Control","authors":"Shengzhi Wang, George Mesesan, Johannes Englsberger, Dongheui Lee, C. Ott","doi":"10.1109/HUMANOIDS47582.2021.9555676","DOIUrl":"https://doi.org/10.1109/HUMANOIDS47582.2021.9555676","url":null,"abstract":"We propose an online learning framework to reduce the effect of model inaccuracies and improve the robustness of the Divergent Component of Motion (DCM)-based walking algorithm. This framework uses the iterative learning control (ILC) theory for learning an adjusted Virtual Repellent Point (VRP) reference trajectory based on the current VRP error. The learned VRP reference waypoints are saved in a memory butter and used in the subsequent walking iteration. Based on the availability of force-torque (FT) sensors, we propose two different implementations using different VRP error signals for learning: measurement-error-based and commanded-error-based framework. Both implementations reduce the average VRP errors and demonstrate improved walking robustness. The measurement-error-based framework has better reference trajectory tracking performance for the measured VRP.","PeriodicalId":320510,"journal":{"name":"2020 IEEE-RAS 20th International Conference on Humanoid Robots (Humanoids)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128384339","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A HZD-based Framework for the Real-time, Optimization-free Enforcement of Gait Feasibility Constraints
Pub Date: 2021-07-19, DOI: 10.1109/HUMANOIDS47582.2021.9555786
Pravin Dangol, Andrew Lessieur, Eric N. Sihite, A. Ramezani
Real-time constraint satisfaction for robots can be quite challenging due to the high computational complexity that arises when accounting for the system dynamics and environmental interactions, often requiring modelling simplifications that do not account for all performance criteria. We instead propose an optimization-free approach in which reference trajectories are manipulated to satisfy constraints arising from ground contact as well as those prescribed for states and inputs. Unintended changes to trajectories, especially ones optimized to produce periodic gaits, can adversely affect gait stability; however, we show that our approach can still guarantee gait stability by employing the coaxial thrusters that are unique to our robot.
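As a rough illustration of the optimization-free idea of manipulating references rather than solving a constrained program, the sketch below clamps a reference to state bounds and then shrinks it so that a simple proportional command respects input bounds. The proportional law, gains, and variable names are assumptions for illustration, not the HZD-based controller from the paper.

```python
import numpy as np

def manipulate_reference(q, q_ref, q_min, q_max, u_min, u_max, kp=30.0):
    """Clamp the reference to the state bounds, then shrink it further so the
    tracking command implied by a proportional law stays within the input
    bounds -- constraints are met without solving an optimization problem."""
    q_ref = np.clip(q_ref, q_min, q_max)      # state constraints
    u = kp * (q_ref - q)                      # implied tracking command
    u_sat = np.clip(u, u_min, u_max)          # input constraints
    return q + u_sat / kp, u_sat              # adjusted reference and command
```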
{"title":"A HZD-based Framework for the Real-time, Optimization-free Enforcement of Gait Feasibility Constraints","authors":"Pravin Dangol, Andrew Lessieur, Eric N. Sihite, A. Ramezani","doi":"10.1109/HUMANOIDS47582.2021.9555786","DOIUrl":"https://doi.org/10.1109/HUMANOIDS47582.2021.9555786","url":null,"abstract":"Real-time constraint satisfaction for robots can be quite challenging due to the high computational complexity that arises when accounting for the system dynamics and environmental interactions, often requiring simplification in modelling that might not necessarily account for all performance criteria. We instead propose an optimization-free approach where reference trajectories are manipulated to satisfy constraints brought on by ground contact as well as those prescribed for states and inputs. Unintended changes to trajectories especially ones optimized to produce periodic gaits can adversely affect gait stability, however we will show our approach can still guarantee stability of a gait by employing the use of coaxial thrusters that are unique to our robot.","PeriodicalId":320510,"journal":{"name":"2020 IEEE-RAS 20th International Conference on Humanoid Robots (Humanoids)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131517076","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Walking-in-Place Foot Interface for Locomotion Control and Telepresence of Humanoid Robots
Pub Date: 2021-07-19, DOI: 10.1109/HUMANOIDS47582.2021.9555768
Ata Otaran, I. Farkhatdinov
We present a foot-tapping, walking-in-place locomotion interface and an algorithm to generate high-level movement patterns for remotely controlled humanoid robots. Foot-tapping motions on the platform are used as movement commands to remotely control the locomotion of a humanoid robot. We describe two separate motion mapping algorithms, suitable for wheeled and bipedal humanoid locomotion respectively. Our interface enables remote locomotion control of humanoid robots from a seated, hands-free position and enables the use of either handheld or desktop-based interfaces for manipulation tasks. An experimental study with eight participants controlling the walking speed of a virtual robot was conducted to explore whether the participants could maintain a distance (1-3 m range) to a reference target (leading robot) moving at different speeds. All participants were able to use the proposed interface to track the leading robot efficiently for walking speeds below 1 m/s, and the average tracking error was 0.47 m. We discuss the results of the study along with the NASA TLX and system usability surveys.
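A minimal sketch of one plausible tap-to-velocity mapping is given below: tap frequency is converted into a forward-speed command, and the robot stops when tapping stops. The gain, speed limit, and timeout values are illustrative assumptions, not the mapping used in the study.

```python
import time

class TapToVelocity:
    """Map foot-tap events from a seated interface to a forward-velocity
    command: faster tapping -> faster walking, no taps -> stop."""
    def __init__(self, gain=0.4, max_speed=1.0, timeout=1.0):
        self.gain, self.max_speed, self.timeout = gain, max_speed, timeout
        self.last_tap = None
        self.speed = 0.0

    def on_tap(self, t=None):
        t = time.time() if t is None else t
        if self.last_tap is not None:
            freq = 1.0 / max(t - self.last_tap, 1e-3)      # taps per second
            self.speed = min(self.gain * freq, self.max_speed)
        self.last_tap = t

    def command(self, t=None):
        t = time.time() if t is None else t
        if self.last_tap is None or t - self.last_tap > self.timeout:
            self.speed = 0.0                               # stop when tapping stops
        return self.speed
```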
{"title":"Walking-in-Place Foot Interface for Locomotion Control and Telepresence of Humanoid Robots","authors":"Ata Otaran, I. Farkhatdinov","doi":"10.1109/HUMANOIDS47582.2021.9555768","DOIUrl":"https://doi.org/10.1109/HUMANOIDS47582.2021.9555768","url":null,"abstract":"We present a foot-tapping walking-in-place type locomotion interface and algorithm to generate high-level movement patterns for humanoid remotely controlled robots. Foot tapping motions acting on the platform are used as movement commands to remotely control locomotion of a humanoid robot. We describe two separate motion mapping algorithms suitable for wheeled and bipedal humanoid locomotion. Our interface enables remote locomotion control of humanoid robots with the help of a seated and hands-free interface and enables the use of both handheld or desktop-based interfaces for manipulation tasks. An experimental study with eight participants controlling walking speed of a virtual robot was conducted to explore if the participants could maintain distance (1-3m range) to a reference target (leading robot) moving at different speeds. All participants were able to use the proposed interface to track the leading robot efficiently for walking speeds of less than 1 m/s and an average tracking error was 0.47 m. We discuss the results of the study along with the NASA TLX and system usability surveys.","PeriodicalId":320510,"journal":{"name":"2020 IEEE-RAS 20th International Conference on Humanoid Robots (Humanoids)","volume":"529 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132558712","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Safe Data-Driven Contact-Rich Manipulation
Pub Date: 2021-07-19, DOI: 10.1109/HUMANOIDS47582.2021.9555680
Ioanna Mitsioni, Pouria Tajvar, D. Kragic, Jana Tumova, Christian Pek
In this paper, we address the safety of data-driven control for contact-rich manipulation. We propose to restrict the controller's action space to keep the system in a set of safe states. In the absence of an analytical model, we show how Gaussian Processes (GP) can be used to approximate safe sets. Using the GP, we disable inputs for which the predicted states are likely to be unsafe. Furthermore, we show how locally designed feedback controllers can be used to improve execution precision in the presence of modelling errors. We demonstrate the benefits of our method on a pushing task with a variety of dynamics, using known and unknown surfaces and different object loads. Our results illustrate that the proposed approach significantly improves the performance and safety of the baseline controller.
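The abstract does not give implementation details; the following sketch shows one generic way a GP forward model could be used to filter out unsafe actions, using scikit-learn's GaussianProcessRegressor and a conservative mean plus/minus k-sigma check on the predicted next state. The data, kernel, safety predicate, and risk margin are placeholders, not the authors' setup.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Fit a GP forward model on logged transitions: (state, action) -> next state.
X = np.random.rand(200, 4)        # placeholder [state(2), action(2)] inputs
Y = np.random.rand(200, 2)        # placeholder observed next states
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2)).fit(X, Y)

def safe_actions(state, candidate_actions, is_safe, risk_margin=2.0):
    """Keep only actions whose GP-predicted next state is safe even under a
    conservative mean +/- risk_margin * std bound."""
    kept = []
    for a in candidate_actions:
        x = np.concatenate([state, a]).reshape(1, -1)
        mean, std = gp.predict(x, return_std=True)
        corners = [mean - risk_margin * std, mean + risk_margin * std]
        if all(is_safe(c.ravel()) for c in corners):
            kept.append(a)
    return kept

# Example predicate: a state is "safe" if it stays inside the unit box.
is_safe = lambda s: bool(np.all((s >= 0.0) & (s <= 1.0)))
filtered = safe_actions(np.array([0.5, 0.5]),
                        [np.array([0.1, 0.0]), np.array([0.9, 0.9])], is_safe)
```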
{"title":"Safe Data-Driven Contact-Rich Manipulation","authors":"Ioanna Mitsioni, Pouria Tajvar, D. Kragic, Jana Tumova, Christian Pek","doi":"10.1109/HUMANOIDS47582.2021.9555680","DOIUrl":"https://doi.org/10.1109/HUMANOIDS47582.2021.9555680","url":null,"abstract":"In this paper, we address the safety of data-driven control for contact-rich manipulation. We propose to restrict the controller’s action space to keep the system in a set of safe states. In the absence of an analytical model, we show how Gaussian Processes (GP) can be used to approximate safe sets. We disable inputs for which the predicted states are likely to be unsafe using the GP. Furthermore, we show how locally designed feedback controllers can be used to improve the execution precision in the presence of modelling errors. We demonstrate the benefits of our method on a pushing task with a variety of dynamics, by using known and unknown surfaces and different object loads. Our results illustrate that the proposed approach significantly improves the performance and safety of the baseline controller.","PeriodicalId":320510,"journal":{"name":"2020 IEEE-RAS 20th International Conference on Humanoid Robots (Humanoids)","volume":"103 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115228060","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Elliott and Connolly Benchmark: A Test for Evaluating the In-Hand Dexterity of Robot Hands
Pub Date: 2021-07-19, DOI: 10.1109/HUMANOIDS47582.2021.9555798
Ryan Coulson, C. Li, C. Majidi, N. Pollard
Achieving dexterous in-hand manipulation with robot hands is an extremely challenging problem, in part due to current limitations in hardware design. One notable bottleneck hampering the development of improved hardware for dexterous manipulation is the lack of a standardized benchmark for evaluating in-hand dexterity. To address this issue, we establish a new benchmark for evaluating in-hand dexterity, specifically for humanoid-type robot hands: the Elliott and Connolly Benchmark. This benchmark is based on a classification of human manipulations established by Elliott and Connolly, and consists of 13 distinct in-hand manipulation patterns. We define qualitative and quantitative metrics for evaluation of the benchmark and provide a detailed testing protocol. Additionally, we introduce a dexterous robot hand, the CMU Foam Hand III, which is evaluated using the benchmark, successfully completing 10 of the 13 manipulation patterns and outperforming human hand baseline results for several of the patterns.
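For readers implementing the testing protocol, a trivial record-keeping sketch for a benchmark run is shown below; the pattern identifiers and fields are placeholders, as the official metrics and protocol are defined in the paper itself.

```python
# Hypothetical bookkeeping for one benchmark run over the 13 manipulation patterns.
results = {f"pattern_{i:02d}": {"completed": False, "attempts": 0} for i in range(1, 14)}

def log_trial(pattern_id, success):
    """Record one trial of a given pattern."""
    results[pattern_id]["attempts"] += 1
    results[pattern_id]["completed"] |= success

def summary():
    done = sum(r["completed"] for r in results.values())
    return f"{done}/13 manipulation patterns completed"
```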
{"title":"The Elliott and Connolly Benchmark: A Test for Evaluating the In-Hand Dexterity of Robot Hands","authors":"Ryan Coulson, C. Li, C. Majidi, N. Pollard","doi":"10.1109/HUMANOIDS47582.2021.9555798","DOIUrl":"https://doi.org/10.1109/HUMANOIDS47582.2021.9555798","url":null,"abstract":"Achieving dexterous in-hand manipulation with robot hands is an extremely challenging problem, in part due to current limitations in hardware design. One notable bottleneck hampering the development of improved hardware for dexterous manipulation is the lack of a standardized benchmark for evaluating in-hand dexterity. In order to address this issue, we establish a new benchmark for evaluating in-hand dexterity, specifically for humanoid type robot hands: the Elliott and Connolly Benchmark. This benchmark is based on a classification of human manipulations established by Elliott and Connolly, and consists of 13 distinct in-hand manipulation patterns. We define qualitative and quantitative metrics for evaluation of the benchmark, and provide a detailed testing protocol. Additionally, we introduce a dexterous robot hand - the CMU Foam Hand III - which is evaluated using the benchmark, successfully completing 10 of the 13 manipulation patterns and outperforming human hand baseline results for several of the patterns.","PeriodicalId":320510,"journal":{"name":"2020 IEEE-RAS 20th International Conference on Humanoid Robots (Humanoids)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131881788","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Guided Robot Skill Learning: A User-Study on Learning Probabilistic Movement Primitives with Non-Experts
Pub Date: 2021-07-19, DOI: 10.1109/HUMANOIDS47582.2021.9555785
Moritz Knaust, Dorothea Koert
Intelligent robots can potentially assist humans in everyday life and in industrial production processes. However, the variety of tasks for such robots renders pure preprogramming infeasible, and learning new tasks directly from non-expert users becomes desirable. To this end, imitation learning and the concept of movement primitives are promising and widely used approaches. In particular, Probabilistic Movement Primitives (ProMPs) provide a representation that can capture and exploit the variance in human demonstrations. While ProMPs have already been applied to different robotic tasks, an evaluation of how non-expert users can actually teach full tasks based on ProMPs is missing from the literature. We present a framework for Guided Robot Skill Learning which enables inexperienced users to teach a robot combinations of ProMPs and basic robot motions such as gripper commands or Point-to-Point movements. The proposed approach represents the learned skills as sequential Behavior Trees, which can easily be incorporated into more complex robotic behaviors. In a pilot user study with 10 participants, we investigate, on two robotic tasks, how inexperienced users train ProMP-based skills and how they use the concept of modular skill creation. The experimental results show that ProMPs enable more successful task execution than teaching Point-to-Point motions. Additionally, our evaluation reveals specific problems that are relevant to future ProMP-based teaching systems for non-expert users, such as multimodality and missing variance in the demonstrations.
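As background for readers unfamiliar with ProMPs, the following single-DoF sketch fits per-demonstration basis-function weights by ridge regression and models their distribution as a Gaussian, which is the core of the ProMP representation. The basis count, basis widths, and toy demonstrations are assumptions, not the study's configuration.

```python
import numpy as np

def rbf_features(T, n_basis=15, width=0.02):
    """Normalized radial basis features over the phase z in [0, 1]."""
    z = np.linspace(0, 1, T)[:, None]
    centers = np.linspace(0, 1, n_basis)[None, :]
    phi = np.exp(-(z - centers) ** 2 / (2 * width))
    return phi / phi.sum(axis=1, keepdims=True)          # shape (T, n_basis)

def fit_promp(demos, n_basis=15, reg=1e-6):
    """Fit per-demonstration weights by ridge regression, then model the
    weight distribution as a Gaussian (mean, covariance)."""
    weights = []
    for y in demos:                                       # each y: (T,) trajectory of one DoF
        phi = rbf_features(len(y), n_basis)
        w = np.linalg.solve(phi.T @ phi + reg * np.eye(n_basis), phi.T @ y)
        weights.append(w)
    W = np.stack(weights)
    return W.mean(axis=0), np.cov(W, rowvar=False)

# Mean trajectory reproduced from the learned weight distribution.
demos = [np.sin(np.linspace(0, np.pi, 100)) + 0.05 * np.random.randn(100) for _ in range(8)]
mu_w, cov_w = fit_promp(demos)
mean_traj = rbf_features(100) @ mu_w
```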
{"title":"Guided Robot Skill Learning: A User-Study on Learning Probabilistic Movement Primitives with Non-Experts","authors":"Moritz Knaust, Dorothea Koert","doi":"10.1109/HUMANOIDS47582.2021.9555785","DOIUrl":"https://doi.org/10.1109/HUMANOIDS47582.2021.9555785","url":null,"abstract":"Intelligent robots can potentially assist humans in everyday life and industrial production processes. However, the variety of different tasks for such robots renders pure preprogramming infeasible, and learning new tasks directly from non-expert users becomes desirable. Hereby, imitation learning and the concept of movement primitives are promising and widely used approaches. In particular, Probabilistic Movement Primitives (ProMPs) provide a representation that can capture and exploit the variance in human demonstrations. While ProMPs have already been applied for different robotic tasks, an evaluation of how non-expert users can actually teach full tasks based on ProMPs is missing in the literature. We present a framework for Guided Robot Skill Learning which enables inexperienced users to teach a robot combinations of ProMPs and basic robot motions such as gripper commands or Point-to-Point movements. The proposed approach represents the learned skills in the form of sequential Behavior Trees, which can be easily incorporated into more complex robotic behaviors. In a pilot user study with 10 participants, we investigate on two robotic tasks how inexperienced users train ProMP based skills and how they use the concept of modular skill creation. The experimental results show that ProMPs enable more successful task execution compared to teaching Point-to-Point motions. Additionally, our evaluation reveals specific problems that are relevant to consider in future ProMP based teaching systems for non-expert users such as multimodality and missing variance in the demonstrations.","PeriodicalId":320510,"journal":{"name":"2020 IEEE-RAS 20th International Conference on Humanoid Robots (Humanoids)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128079804","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Online Learning of Danger Avoidance for Complex Structures of Musculoskeletal Humanoids and Its Applications
Pub Date: 2021-07-19, DOI: 10.1109/HUMANOIDS47582.2021.9555792
Kento Kawaharazuka, Naoki Hiraoka, Yuya Koga, Manabu Nishiura, Yusuke Omura, Yuki Asano, K. Okada, Koji Kawasaki, M. Inaba
The complex structure of musculoskeletal humanoids makes them difficult to model, and inter-body interference and high internal muscle forces are unavoidable. Although various safety mechanisms have been developed to address this problem, it is important not only to deal with dangers when they occur but also to prevent them from happening. In this study, we propose a method to learn, online, a network that outputs a danger probability from the muscle lengths, so that the robot can gradually prevent dangers from occurring. Applications of this network for control are also described. The method is applied to the musculoskeletal humanoid Musashi, and its effectiveness is verified.
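A minimal sketch of such an online-trained danger classifier is shown below in PyTorch: muscle lengths in, danger probability out, updated one sample at a time. The network size, muscle count, and training details are assumptions, not the architecture used on Musashi.

```python
import torch
import torch.nn as nn

# Stand-in for the danger-probability network: muscle lengths in, probability
# of a dangerous configuration out, trained online sample by sample.
n_muscles = 10                       # placeholder muscle count
net = nn.Sequential(nn.Linear(n_muscles, 64), nn.ReLU(),
                    nn.Linear(64, 1), nn.Sigmoid())
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

def online_update(muscle_lengths, danger_label):
    """One online learning step from the latest observation."""
    x = torch.tensor(muscle_lengths, dtype=torch.float32).unsqueeze(0)
    y = torch.tensor([[float(danger_label)]])
    opt.zero_grad()
    loss = loss_fn(net(x), y)
    loss.backward()
    opt.step()
    return loss.item()

def danger_probability(muscle_lengths):
    """Query the current danger estimate, e.g. to modulate the controller."""
    with torch.no_grad():
        x = torch.tensor(muscle_lengths, dtype=torch.float32).unsqueeze(0)
        return net(x).item()
```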
{"title":"Online Learning of Danger Avoidance for Complex Structures of Musculoskeletal Humanoids and Its Applications","authors":"Kento Kawaharazuka, Naoki Hiraoka, Yuya Koga, Manabu Nishiura, Yusuke Omura, Yuki Asano, K. Okada, Koji Kawasaki, M. Inaba","doi":"10.1109/HUMANOIDS47582.2021.9555792","DOIUrl":"https://doi.org/10.1109/HUMANOIDS47582.2021.9555792","url":null,"abstract":"The complex structure of musculoskeletal humanoids makes it difficult to model them, and the inter-body interference and high internal muscle force are unavoidable. Although various safety mechanisms have been developed to solve this problem, it is important not only to deal with the dangers when they occur but also to prevent them from happening. In this study, we propose a method to learn a network outputting danger probability corresponding to the muscle length online so that the robot can gradually prevent dangers from occurring. Applications of this network for control are also described. The method is applied to the musculoskeletal humanoid, Musashi, and its effectiveness is verified.","PeriodicalId":320510,"journal":{"name":"2020 IEEE-RAS 20th International Conference on Humanoid Robots (Humanoids)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115916201","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Android Printing: Towards On-Demand Android Development Employing Multi-Material 3-D Printer
Pub Date: 2021-07-19, DOI: 10.36227/techrxiv.15034623
S. Yagi, Yoshihiro Nakata, H. Ishiguro
In this paper, we propose the concept of Android Printing: printing a full android, including skin and mechanical components, in a single run using a multi-material 3D printer. Printing an android all at once both reduces assembly time and enables intricate designs with a high number of degrees of freedom. To prove this concept, we printed an actual android. First, we printed the skin with multiple annular ridges to test skin deformation. By pulling the skin, we show that the state of deformation of the skin can be adjusted depending on the ridge structure. This result is essential for designing humanlike skin deformations. We then designed and fabricated a 3D-printed android head with 31 degrees of freedom. The skin and linkage mechanism were printed together before connecting them to a unit combining several electric motors. To confirm the feasibility of our concept, we created several motions with the android based on human facial movement data. In the future, android printing might enable people to use an android as their own avatar.
{"title":"Android Printing: Towards On-Demand Android Development Employing Multi-Material 3-D Printer","authors":"S. Yagi, Yoshihiro Nakata, H. Ishiguro","doi":"10.36227/techrxiv.15034623","DOIUrl":"https://doi.org/10.36227/techrxiv.15034623","url":null,"abstract":"In this paper, we propose the concept of Android Printing, which is printing a full android, including skin and mechanical components in a single run using a multi-material 3D printer. Printing an android all at once both reduces assembly time and enables intricate designs with a high degrees of freedom. To prove this concept, we tested by actual printing an android. First, we printed the skin with multiple annular ridges to test skin deformation. By pulling the skin, we show that the state of deformation of the skin can be adjusted depending on the ridge structure. This result is essential in designing humanlike skin deformations. After that, we designed and fabricated a 3-D printed android head with 31 degrees of freedom. The skin and linkage mechanism were printed together before connecting them to a unit combining several electric motors. To confirm our concept’s feasibility, we created several motions with the android based on human facial movement data. In the future, android printing might enable people to use an android as their own avatar.","PeriodicalId":320510,"journal":{"name":"2020 IEEE-RAS 20th International Conference on Humanoid Robots (Humanoids)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114478682","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Semantic Scene Manipulation Based on 3D Spatial Object Relations and Language Instructions
Pub Date: 2021-07-19, DOI: 10.1109/HUMANOIDS47582.2021.9555802
Rainer Kartmann, Danqing Liu, T. Asfour
Robot understanding of spatial object relations is key for symbiotic human-robot interaction. Understanding the meaning of such relations between objects in a current scene, and of target relations specified in natural language commands, is essential for generating robot manipulation action goals that change the scene by relocating objects relative to each other to fulfill the desired spatial relations. This ability requires a representation of spatial relations that maps spatial relation symbols extracted from language instructions to subsymbolic object goal locations in the world. We present a generative model of static and dynamic 3D spatial relations between multiple reference objects. The model is based on a parametric probability distribution defined in cylindrical coordinates and is learned from examples provided by humans manipulating a scene in the real world. We demonstrate the ability of our representation to generate suitable object goal positions for a pick-and-place task on a humanoid robot: object relations specified in natural language commands are extracted, and object goal positions are determined and used to parametrize the actions needed to transform a given scene into a new one that fulfills the specified relations.
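To make the cylindrical-coordinate idea concrete, the sketch below fits a Gaussian over (radius, angle, height) offsets between a target and a single reference object and samples goal positions from it. It ignores angle wrap-around and multiple reference objects, and all names are placeholders rather than the paper's model.

```python
import numpy as np

def fit_relation_model(examples):
    """Fit a Gaussian over cylindrical offsets (radius, angle, height) of the
    target object relative to a reference object, from human-arranged scenes.
    examples: list of (target_xyz, reference_xyz) pairs."""
    offsets = []
    for target_xyz, reference_xyz in examples:
        d = np.asarray(target_xyz) - np.asarray(reference_xyz)
        r, theta, z = np.hypot(d[0], d[1]), np.arctan2(d[1], d[0]), d[2]
        offsets.append([r, theta, z])
    offsets = np.array(offsets)
    return offsets.mean(axis=0), np.cov(offsets, rowvar=False)

def sample_goal_position(mean, cov, reference_xyz, rng=None):
    """Sample a Cartesian goal position for the target object that realizes
    the learned relation with respect to the reference object."""
    rng = np.random.default_rng() if rng is None else rng
    r, theta, z = rng.multivariate_normal(mean, cov)
    ref = np.asarray(reference_xyz)
    return ref + np.array([r * np.cos(theta), r * np.sin(theta), z])
```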
{"title":"Semantic Scene Manipulation Based on 3D Spatial Object Relations and Language Instructions","authors":"Rainer Kartmann, Danqing Liu, T. Asfour","doi":"10.1109/HUMANOIDS47582.2021.9555802","DOIUrl":"https://doi.org/10.1109/HUMANOIDS47582.2021.9555802","url":null,"abstract":"Robot understanding of spatial object relations is key for a symbiotic human-robot interaction. Understanding the meaning of such relations between objects in a current scene and target relations specified in natural language commands is essential for the generation of robot manipulation action goals to change the scene by relocating objects relative to each other to fulfill the desired spatial relations. This ability requires a representation of spatial relations, which maps spatial relation symbols extracted from language instructions to subsymbolic object goal locations in the world. We present a generative model of static and dynamic 3D spatial relations between multiple reference objects. The model is based on a parametric probability distribution defined in cylindrical coordinates and is learned from examples provided by humans manipulating a scene in the real world. We demonstrate the ability of our representation to generate suitable object goal positions for a pick-and-place task on a humanoid robot, where object relations specified in natural language commands are extracted, object goal positions are determined and used for parametrizing the actions needed to transfer a given scene into a new one that fulfills the specified relations.","PeriodicalId":320510,"journal":{"name":"2020 IEEE-RAS 20th International Conference on Humanoid Robots (Humanoids)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117151887","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multisensorial robot calibration framework and toolbox
Pub Date: 2021-07-19, DOI: 10.1109/HUMANOIDS47582.2021.9555803
Jakub Rozlivek, Lukas Rustler, K. Štěpánová, M. Hoffmann
The accuracy of robot models critically impacts their performance. With the advent of collaborative, social, or soft robots, the stiffness of the materials and the precision of the manufactured parts drop, and CAD models provide a less accurate basis for the models. On the other hand, these machines often come with a rich set of powerful yet inexpensive sensors, which opens up the possibility of self-contained calibration approaches that can be performed autonomously and repeatedly by the robot. In this work, we extend the theory of robot kinematic calibration by incorporating new sensory modalities (e.g., cameras on the robot, whole-body tactile sensors), calibration types, and their combinations. We provide a unified formulation that makes it possible to combine traditional approaches (external laser tracker, constraints from contact with the external environment) with self-contained calibration available to humanoid robots (self-observation, self-contact) in a single framework with a single cost function. Second, we present an open-source toolbox for Matlab that provides this functionality, along with additional tools for preprocessing (e.g., dataset visualization) and evaluation (e.g., observability/identifiability). We illustrate some of the possibilities of this tool through the calibration of two humanoid robots (iCub, Nao) and one industrial manipulator (a dual-arm setup with the Yaskawa-Motoman MA1400).
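The toolbox itself is in Matlab; as a language-neutral illustration of the single-cost-function idea, the Python sketch below stacks residuals from external measurements, self-observation, and self-contact into one nonlinear least-squares problem. The dataset layout and the forward-kinematics interface are assumptions for illustration, not the toolbox API.

```python
import numpy as np
from scipy.optimize import least_squares

def combined_residuals(params, datasets, forward_kinematics):
    """Stack residuals from several calibration modalities into one cost:
    external marker measurements, self-observation by an on-board camera,
    and self-contact (the two touching end effectors should coincide)."""
    res = []
    for p_meas, joints in datasets.get("external", []):
        res.append(forward_kinematics(params, joints) - p_meas)
    for p_seen, joints in datasets.get("self_observation", []):
        res.append(forward_kinematics(params, joints) - p_seen)
    for joints_a, joints_b in datasets.get("self_contact", []):
        res.append(forward_kinematics(params, joints_a) -
                   forward_kinematics(params, joints_b))
    return np.concatenate(res)

# One solver call covers all modalities with a single cost function, e.g.:
# sol = least_squares(combined_residuals, x0, args=(datasets, fk))
```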
{"title":"Multisensorial robot calibration framework and toolbox","authors":"Jakub Rozlivek, Lukas Rustler, K. Štěpánová, M. Hoffmann","doi":"10.1109/HUMANOIDS47582.2021.9555803","DOIUrl":"https://doi.org/10.1109/HUMANOIDS47582.2021.9555803","url":null,"abstract":"The accuracy of robot models critically impacts their performance. With the advent of collaborative, social, or soft robots, the stiffness of the materials and the precision of the manufactured parts drops and CAD models provide a less accurate basis for the models. On the other hand, the machines often come with a rich set of powerful yet inexpensive sensors, which opens up the possibility for self-contained calibration approaches that can be performed autonomously and repeatedly by the robot. In this work, we extend the theory dealing with robot kinematic calibration by incorporating new sensory modalities (e.g., cameras on the robot, whole-body tactile sensors), calibration types, and their combinations. We provide a unified formulation that makes it possible to combine traditional approaches (external laser tracker, constraints from contact with the external environment) with self-contained calibration available to humanoid robots (self-observation, self-contact) in a single framework and single cost function. Second, we present an open source toolbox for Matlab that provides this functionality, along with additional tools for preprocessing (e.g., dataset visualization) and evaluation (e.g., observability/identifiability). We illustrate some of the possibilities of this tool through calibration of two humanoid robots (iCub, Nao) and one industrial manipulator (dual-arm setup with Yaskawa-Motoman MA1400).","PeriodicalId":320510,"journal":{"name":"2020 IEEE-RAS 20th International Conference on Humanoid Robots (Humanoids)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122339019","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}