Pub Date : 2019-10-01 DOI: 10.1109/Humanoids43949.2019.9035002
Visuo-Haptic Grasping of Unknown Objects based on Gaussian Process Implicit Surfaces and Deep Learning
Simon Ottenhaus, Daniel Renninghoff, Raphael Grimm, Fábio Ferreira, T. Asfour
Grasping unknown objects is a challenging task for humanoid robots, as planning and execution have to cope with noisy sensor data. This work presents a framework that integrates sensing, planning, and acting in one visuo-haptic grasping pipeline. Visual and tactile perception are fused using Gaussian Process Implicit Surfaces to estimate the object surface. Two grasp planners then generate grasp candidates, which are used to train a neural network to determine the best grasp. The main contribution of this work is the introduction of a discriminative deep neural network for scoring grasp hypotheses for underactuated humanoid hands. The pipeline delivers full 6D grasp poses for multi-fingered humanoid hands but is not limited to any specific gripper. The pipeline is trained and evaluated in simulation on objects from the YCB and KIT object sets, resulting in a 95% success rate with respect to force closure. To prove the validity of the proposed approach, the pipeline is executed on the humanoid robot ARMAR-6 in experiments with eight non-trivial objects using an underactuated five-finger hand.
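As a rough illustration of the GPIS fusion step (not the authors' implementation), the sketch below fits a Gaussian process to visual and tactile points so that its zero level set approximates the object surface; scikit-learn, the RBF kernel, and all data are assumptions of this example.

```python
# Minimal GPIS sketch: fuse visual and tactile surface points into a single
# implicit-surface estimate. Kernel and data here are illustrative stand-ins.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
visual_pts = rng.uniform(0.4, 0.6, size=(50, 3))   # stand-in: depth-camera points
tactile_pts = rng.uniform(0.4, 0.6, size=(5, 3))   # stand-in: fingertip contacts
centre = visual_pts.mean(axis=0, keepdims=True)    # assumed interior point

# Implicit function f: 0 on the surface, negative inside the object.
X = np.vstack([visual_pts, tactile_pts, centre])
y = np.hstack([np.zeros(len(visual_pts) + len(tactile_pts)), [-1.0]])

gpis = GaussianProcessRegressor(kernel=RBF(length_scale=0.1), alpha=1e-4)
gpis.fit(X, y)

# A query point lies on the estimated surface where the predicted value is ~0;
# the predictive std doubles as a surface-uncertainty measure for planning.
value, std = gpis.predict(np.array([[0.5, 0.5, 0.55]]), return_std=True)
```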
{"title":"Visuo-Haptic Grasping of Unknown Objects based on Gaussian Process Implicit Surfaces and Deep Learning","authors":"Simon Ottenhaus, Daniel Renninghoff, Raphael Grimm, Fábio Ferreira, T. Asfour","doi":"10.1109/Humanoids43949.2019.9035002","DOIUrl":"https://doi.org/10.1109/Humanoids43949.2019.9035002","url":null,"abstract":"Grasping unknown objects is a challenging task for humanoid robots, as planning and execution have to cope with noisy sensor data. This work presents a framework, which integrates sensing, planning and acting in one visuo-haptic grasping pipeline. Visual and tactile perception are fused using Gaussian Process Implicit Surfaces to estimate the object surface. Two grasp planners then generate grasp candidates, which are used to train a neural network to determine the best grasp. The main contribution of this work is the introduction of a discriminative deep neural network for scoring grasp hypotheses for underactuated humanoid hands. The pipeline delivers full 6D grasp poses for multi-fingered humanoid hands but it is not limited to any specific gripper. The pipeline is trained and evaluated in simulation, based on objects from the YCB and KIT object sets, resulting in a 95 % success rate regarding force-closure. To prove the validity of the proposed approach, the pipeline is executed on the humanoid robot ARMAR-6 in experiments with eight non-trivial objects using an underactuated five finger hand.","PeriodicalId":404758,"journal":{"name":"2019 IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130063814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2019-10-01 DOI: 10.1109/Humanoids43949.2019.9035054
Resource-Aware Object Classification and Segmentation for Semi-Autonomous Grasping with Prosthetic Hands
Felix Hundhausen, Denis Megerle, T. Asfour
Myoelectric control of prosthetic hands relies on electromyographic (EMG) signals, typically captured by two surface electrodes attached to the body in different setups. Controlling the hand this way requires long user training and depends heavily on the robustness of the EMG signals. In this paper, we present a visual perception system that extracts scene information for semi-autonomous hand control, minimizing the required command complexity and leading to more intuitive and effortless control. We present methods, optimized for minimal resource demand, that derive scene information from the images of a camera inside the hand. In particular, we show object classification and semantic segmentation of image data realized by convolutional neural networks (CNNs). We present a system architecture that takes user feedback into account and thereby improves results. In addition, we present an evolutionary algorithm to optimize the CNN architecture with respect to accuracy and hardware resource demand. Our evaluation shows a classification accuracy of 96.5% and a segmentation accuracy of up to 89.5% on an in-hand ARM Cortex-M7 microcontroller running at only 400 MHz.
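For a sense of what "resource-aware" means in practice, here is a minimal sketch of a microcontroller-sized classification CNN; the layer sizes, input resolution, and class count are illustrative assumptions, not the paper's evolved architecture.

```python
# Tiny classification CNN in the spirit of resource-aware in-hand perception.
# Separable convolutions and global pooling keep the parameter count small.
import tensorflow as tf

def tiny_classifier(num_classes: int = 10) -> tf.keras.Model:
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(64, 64, 1)),              # low-res grayscale frame
        tf.keras.layers.SeparableConv2D(8, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.SeparableConv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.GlobalAveragePooling2D(),               # avoids a large dense layer
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])

model = tiny_classifier()
model.summary()   # the parameter count is the proxy for MCU memory demand
```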
{"title":"Resource-Aware Object Classification and Segmentation for Semi-Autonomous Grasping with Prosthetic Hands","authors":"Felix Hundhausen, Denis Megerle, T. Asfour","doi":"10.1109/Humanoids43949.2019.9035054","DOIUrl":"https://doi.org/10.1109/Humanoids43949.2019.9035054","url":null,"abstract":"Myoelectric control of prosthetic hands relies on electromyographic (EMG) signals captured by usually two surface electrodes attached to the human body in different setups. Controlling the hand by the user requires long training and depends heavily on the robustness of the EMG signals. In this paper, we present a visual perception system to extract scene information for semi-autonomous hand-control that allows minimizing required command complexity and leads to more intuitive and effortless control. We present methods that are optimized towards minimal resource demand to derive scene information from visual data from a camera inside the hand. In particular, we show object classification and semantic segmentation of image data realized by convolutional neural networks (CNNs). We present a system architecture, that takes user feedback into account and thereby improves results. In addition, we present an evolutionary algorithm to optimize CNN architecture regarding accuracy and hardware resource demand. Our evaluation shows classification accuracy of 96.5% and segmentation accuracy of up to 89.5% on an in-hand Arm Cortex-H7 microcontroller running at only 400 MHz.","PeriodicalId":404758,"journal":{"name":"2019 IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids)","volume":"77 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130250134","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2019-10-01 DOI: 10.1109/Humanoids43949.2019.9035048
Learning and Adaptation of Inverse Dynamics Models: A Comparison
Kevin Hitzler, Franziska Meier, S. Schaal, T. Asfour
Performing tasks with high accuracy while interacting with the real world requires a robot to have an exact representation of its inverse dynamics that can be adapted to new situations. Various methods for learning inverse dynamics models have been proposed that combine well-known rigid-body dynamics with model-based parameter estimation, or that learn directly on measured data using regression. However, there are still open questions regarding the efficiency of model-based learning compared to data-driven approaches, as well as their capability to adapt to changing dynamics. In this paper, we compare state-of-the-art inertial parameter estimation to a purely data-driven and a model-based approach on simulated and real data collected with the humanoid robot Apollo. We further compare the adaptation capabilities of two models in a pick-and-place scenario while a) learning the model incrementally and b) extending the initially learned model with an error model. Based on this, we show the gap between simulation and reality and verify the importance of modeling nonlinear effects using regression. Furthermore, we demonstrate that error models outperform incremental learning in adapting inverse dynamics models.
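The error-model idea can be sketched in a few lines: keep an analytic rigid-body model fixed and regress only its residual. The toy 1-DoF dynamics, the unmodeled friction term, and the feature set below are assumptions for illustration, not the paper's models.

```python
# Sketch of the "error model" idea: an analytic rigid-body model stays fixed
# and a regressor learns only what that model misses.
import numpy as np
from sklearn.linear_model import Ridge

def rbd_torque(q, qd, qdd):
    """Stand-in for an analytic rigid-body inverse dynamics model."""
    return 2.0 * qdd + 0.5 * qd + 9.81 * np.sin(q)

rng = np.random.default_rng(0)
q = rng.uniform(-1, 1, 500)
qd = rng.uniform(-1, 1, 500)
qdd = rng.uniform(-1, 1, 500)
tau_meas = rbd_torque(q, qd, qdd) + 0.3 * np.tanh(5 * qd)  # unmodeled friction

# The error model regresses the residual torque only.
X = np.column_stack([q, qd, qdd, np.tanh(5 * qd)])
error_model = Ridge(alpha=1e-3).fit(X, tau_meas - rbd_torque(q, qd, qdd))

tau_pred = rbd_torque(q, qd, qdd) + error_model.predict(X)  # combined prediction
```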
{"title":"Learning and Adaptation of Inverse Dynamics Models: A Comparison","authors":"Kevin Hitzler, Franziska Meier, S. Schaal, T. Asfour","doi":"10.1109/Humanoids43949.2019.9035048","DOIUrl":"https://doi.org/10.1109/Humanoids43949.2019.9035048","url":null,"abstract":"Performing tasks with high accuracy while interacting with the real world requires a robot to have an exact representation of its inverse dynamics that can be adapted to new situations. In the past, various methods for learning inverse dynamics models have been proposed that combine the well-known rigid body dynamics with model-based parameter estimation, or learn directly on measured data using regression. However, there are still open questions regarding the efficiency of model-based learning compared to data-driven approaches as well as their capabilities to adapt to changing dynamics. In this paper, we compare the state-of-the-art inertial parameter estimation to a purely data-driven and a model-based approach on simulated and real data, collected with the humanoid robot Apollo. We further compare the adaptation capabilities of two models in a pick and place scenario while a) learning the model incrementally and b) extending the initially learned model with an error model. Based on this, we show the gap between simulation and reality and verify the importance of modeling nonlinear effects using regression. Furthermore, we demonstrate that error models outperform incremental learning regarding adaptation of inverse dynamics models.","PeriodicalId":404758,"journal":{"name":"2019 IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids)","volume":"241 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134001088","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2019-10-01 DOI: 10.1109/Humanoids43949.2019.9035069
Model Decoupling and Control of the Wheeled Humanoid Robot Moving in Sagittal Plane
Haitao Zhou, Xu Li, Haibo Feng, Jiachen Li, Songyuan Zhang, Yili Fu
The wheeled humanoid robot is a new type of robot that combines a humanoid structure with a Wheeled Inverted Pendulum (WIP) base. Such robots can move rapidly on flat ground as well as stand still on a slope, as demonstrated on the WLR-II robot in this paper. To achieve this, a novel yet simple control framework is designed, comprising two main modules: a wheel balance controller and a centroidal adjustment controller. The former maintains balance of the robotic system by rotating the wheel to move forward or backward, while the latter moves the Center of Mass (CoM) of the robot away from the equilibrium point, producing a specified acceleration that drives the wheel balance controller. To design these two controllers, the dynamic model of the robot in the sagittal plane is decoupled into two simplified models; in particular, the coupling dynamics between them is explicitly considered and alleviated. Experiments conducted on the WLR-II robot show that the proposed control framework enables the robot both to accurately track a velocity trajectory and to stand steadily on a slope.
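A minimal sketch of the balance loop, assuming a linearized WIP model stabilized with LQR; the matrices and parameters below are illustrative, not the paper's controllers or WLR-II's values.

```python
# Linearized wheeled-inverted-pendulum balance loop in the sagittal plane.
# State x = [pitch, pitch_rate, wheel_pos, wheel_vel]; input u = base acceleration.
import numpy as np
from scipy.linalg import solve_continuous_are

g, l = 9.81, 0.8                      # gravity, assumed CoM height
A = np.array([[0, 1, 0, 0],
              [g / l, 0, 0, 0],       # inverted-pendulum instability
              [0, 0, 0, 1],
              [0, 0, 0, 0]])
B = np.array([[0], [-1 / l], [0], [1]])   # accelerating the base tips the pendulum back
Q = np.diag([100.0, 10.0, 1.0, 1.0])
R = np.array([[0.1]])

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)       # u = -K x keeps the pendulum upright

x = np.array([0.05, 0.0, 0.0, 0.0])   # small forward lean
u = -K @ x                            # restoring wheel command

# Centroidal adjustment: shifting the CoM by dx from the equilibrium point
# commands a steady acceleration of roughly a = g * dx / l.
dx = 0.02
print("commanded acceleration ~", g * dx / l)
```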
{"title":"Model Decoupling and Control of the Wheeled Humanoid Robot Moving in Sagittal Plane","authors":"Haitao Zhou, Xu Li, Haibo Feng, Jiachen Li, Songyuan Zhang, Yili Fu","doi":"10.1109/Humanoids43949.2019.9035069","DOIUrl":"https://doi.org/10.1109/Humanoids43949.2019.9035069","url":null,"abstract":"The wheeled humanoid robot is such a new type of robot that combines both the humanoid structure and the Wheeled Inverted Pendulum (WIP) base. They are able to move rapidly on flat ground as well as stand still on the slope, which has been well demonstrated on the WLR-II robot in this paper. In order to achieve it, a novel but simplified control framework is designed, which comprises of two main modules, the wheel balance controller and the centroidal adjustment controller. The former controller helps to maintain balance of the robotic system by rotating the wheel to move forward or backward, while the latter controller works by moving the Center of Mass (CoM) of the robot at a distance from the equilibrium point, which will result in a specified acceleration used to drive the first wheel balance controller. In order to design such these two controllers, the dynamic model of the robot in sagittal plane is decoupled into two relatively simplified model. In particular, the coupled dynamics between each other is significantly considered and alleviated. Experiments conducted on the WLR-II robot show that the proposed control framework can make the robot both accurately track the velocity tajectory and steadily stand on the slope.","PeriodicalId":404758,"journal":{"name":"2019 IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132907602","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2019-10-01 DOI: 10.1109/Humanoids43949.2019.9035019
Stabilization of an Inverted Pendulum via Human Brain Inspired Controller Design
Hedyeh Jafari, G. Nikolakopoulos, T. Gustafsson
The human body is mechanically unstable, and the brain, as the main controller, is responsible for maintaining our balance. However, the balancing mechanisms of the brain are still an open research question; in this article, we therefore propose a novel modeling architecture for replicating and understanding the fundamental mechanisms that generate balance in humans. To this end, a nonlinear Recurrent Neural Network (RNN) is proposed and trained, using system identification techniques on balancing data collected from multiple humans, to predict with high accuracy the action of the Central Nervous System (CNS) in stabilizing the human body. A fundamental contribution of the article is that the obtained network for the balancing mechanisms is experimentally evaluated on a single-link inverted pendulum that replicates the basic model of human balance and can be directly extended to humanoids and balancing exoskeletons.
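A minimal sketch of the identification step, assuming the balance controller is fitted as a sequence-to-sequence regression with a recurrent network; the data, shapes, and layer sizes are stand-ins, not the article's setup.

```python
# Fit a recurrent network to reproduce a stabilizing controller from recorded
# (state -> torque) sequences, standing in for human balancing data.
import numpy as np
import tensorflow as tf

T, N = 50, 200
states = np.random.randn(N, T, 2).astype("float32")          # [angle, angular rate]
torques = (-20 * states[..., :1] - 4 * states[..., 1:]).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(T, 2)),
    tf.keras.layers.SimpleRNN(32, return_sequences=True),     # nonlinear RNN core
    tf.keras.layers.Dense(1),                                 # predicted torque
])
model.compile(optimizer="adam", loss="mse")
model.fit(states, torques, epochs=5, verbose=0)
```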
{"title":"Stabilization of an Inverted Pendulum via Human Brain Inspired Controller Design","authors":"Hedyeh Jafari, G. Nikolakopoulos, T. Gustafsson","doi":"10.1109/Humanoids43949.2019.9035019","DOIUrl":"https://doi.org/10.1109/Humanoids43949.2019.9035019","url":null,"abstract":"The human body is mechanically unstable, while the brain as the main controller, is responsible to maintain our balance. However, the mechanisms of the brain towards balancing are still an open research question and thus in this article, we propose a novel modeling architecture for replicating and understanding the fundamental mechanisms for generating balance in the humans. Towards this aim, a nonlinear Recurrent Neural Network (RNN) has been proposed and trained that has the ability to predict the performance of the Central Nervous System (CNS) in stabilizing the human body with high accuracy and that has been trained based on multiple collected human based balancing data and by utilizing system identification techniques. One fundamental contribution of the article is the fact that the obtained network, for the balancing mechanisms, is experimentally evaluated on a single link inverted pendulum that replicates the basic model of the human balance and can be directly extended in the area of humanoids and balancing exoskeletons.","PeriodicalId":404758,"journal":{"name":"2019 IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids)","volume":"113 ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120886371","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2019-10-01 DOI: 10.1109/Humanoids43949.2019.9035070
Humanoid Whole-Body Movement Optimization from Retargeted Human Motions
Waldez Gomes, Vishnu Radhakrishnan, Luigi Penco, Valerio Modugno, Jean-Baptiste Mouret, S. Ivaldi
Motion retargeting and teleoperation are powerful tools to demonstrate complex whole-body movements to humanoid robots: in a sense, they are the equivalent of kinesthetic teaching for manipulators. However, retargeted motions may not be optimal for the robot: because of different kinematics and dynamics, there could be other robot trajectories that perform the same task more efficiently, for example with less power consumption. We propose to use the retargeted trajectories to bootstrap a learning process aimed at optimizing the whole-body trajectories w.r.t. a specified cost function. To ensure that the optimized motions are safe, i.e., they do not violate system constraints, we use constrained optimization algorithms. We compare both global and local optimization approaches, since the optimized robot solution may not be close to the demonstrated one. We evaluate our framework with the humanoid robot iCub in an object-lifting scenario, initially demonstrated by a human operator wearing a motion-tracking suit. By optimizing the initial retargeted movements, we can improve robot performance by over 40%.
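A minimal sketch of the bootstrapping idea, using a local constrained optimizer from SciPy; the cost, task constraint, and joint limits are placeholder assumptions, not the paper's formulation.

```python
# Bootstrap a constrained trajectory optimization from a retargeted demo:
# the demonstration is the initial guess, constraints keep the motion safe.
import numpy as np
from scipy.optimize import minimize

demo = np.linspace(0.0, 1.0, 20)              # retargeted joint trajectory (1 DoF)

def cost(traj):                               # e.g. a power-consumption proxy
    return np.sum(np.diff(traj) ** 2)

def task_error(traj):                         # the task: reach the demo's goal
    return traj[-1] - demo[-1]

res = minimize(
    cost,
    x0=demo,                                  # demonstration bootstraps the search
    method="SLSQP",
    bounds=[(-1.5, 1.5)] * demo.size,         # joint limits (system constraints)
    constraints=[{"type": "eq", "fun": task_error}],
)
optimized = res.x                             # same task, lower cost than the demo
```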
{"title":"Humanoid Whole-Body Movement Optimization from Retargeted Human Motions","authors":"Waldez Gomes, Vishnu Radhakrishnan, Luigi Penco, Valerio Modugno, Jean-Baptiste Mouret, S. Ivaldi","doi":"10.1109/Humanoids43949.2019.9035070","DOIUrl":"https://doi.org/10.1109/Humanoids43949.2019.9035070","url":null,"abstract":"Motion retargeting and teleoperation are powerful tools to demonstrate complex whole-body movements to humanoid robots: in a sense, they are the equivalent of kinesthetic teaching for manipulators. However, retargeted motions may not be optimal for the robot: because of different kinematics and dynamics, there could be other robot trajectories that perform the same task more efficiently, for example with less power consumption. We propose to use the retargeted trajectories to bootstrap a learning process aimed at optimizing the whole-body trajectories w.r.t. a specified cost function. To ensure that the optimized motions are safe, i.e., they do not violate system constraints, we use constrained optimization algorithms. We compare both global and local optimization approaches, since the optimized robot solution may not be close to the demonstrated one. We evaluate our framework with the humanoid robot iCub on an object lifting scenario, initially demonstrated by a human operator wearing a motion-tracking suit. By optimizing the initial retargeted movements, we can improve robot performance by over 40%.","PeriodicalId":404758,"journal":{"name":"2019 IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124911141","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2019-10-01 DOI: 10.1109/Humanoids43949.2019.9035035
Parallel Link-based Light-Weight Leg Design for Bipedal Robots
Y. Tazaki
A new leg design for bipedal walking robots that utilizes a 6-DoF parallel link mechanism is proposed. Reducing leg inertia is a crucial requirement for realizing agile walking and fall avoidance involving multiple stepping. The proposed parallel link design enables embedding all leg actuators in the torso, thereby significantly reducing the leg inertia. Fundamental kinematic characteristics of the proposed leg mechanism, including its movable range and maximum static load, are shown. A small-sized humanoid robot equipped with the proposed leg mechanism is developed. Experimental results show that the proposed leg mechanism achieves high position-tracking performance even at high frequencies, and that the robot is able to perform basic walking maneuvers with different strides and step durations.
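For intuition, inverse kinematics of a generic 6-DoF parallel (Stewart-type) linkage reduces to computing six link lengths from the desired foot pose; the geometry below is an illustrative assumption, not the proposed leg's.

```python
# Inverse kinematics of a generic 6-DoF parallel linkage: link lengths from
# the desired platform (foot) pose relative to the base (torso) anchors.
import numpy as np

ang = np.linspace(0, 2 * np.pi, 6, endpoint=False)
base_pts = np.column_stack([0.10 * np.cos(ang), 0.10 * np.sin(ang), np.zeros(6)])
plat_pts = np.column_stack([0.05 * np.cos(ang), 0.05 * np.sin(ang), np.zeros(6)])

def actuator_lengths(R, p):
    """Six link lengths for foot rotation R (3x3) and position p (base frame)."""
    return np.linalg.norm((plat_pts @ R.T) + p - base_pts, axis=1)

# Neutral stance: foot 0.4 m below the torso-mounted actuators.
lengths = actuator_lengths(np.eye(3), np.array([0.0, 0.0, -0.4]))
```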
{"title":"Parallel Link-based Light-Weight Leg Design for Bipedal Robots","authors":"Y. Tazaki","doi":"10.1109/Humanoids43949.2019.9035035","DOIUrl":"https://doi.org/10.1109/Humanoids43949.2019.9035035","url":null,"abstract":"A new leg design for bipedal walking robots that utilizes 6-dof parallel link mechanism is proposed. Reducing leg inertia is a crucial requirement for realizing agile walking and fall avoidance involving multiple stepping. The proposed parallel link design enables embedding all leg actuators in the torso and thereby significantly reducing the leg inertia. Some fundamental kinematic characteristics of the proposed leg mechanism including movable range and maximum static load is shown. A real small-sized humanoid robot equipped with the proposed leg mechanism is developed. Experimental results show that the proposed leg mechanism achieves high position tracking performance even at high frequencies, and that the robot is able to perform basic walking maneuvers with different strides and step durations.","PeriodicalId":404758,"journal":{"name":"2019 IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122530231","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2019-10-01 DOI: 10.1109/Humanoids43949.2019.9034993
Deploying the NASA Valkyrie Humanoid for IED Response: An Initial Approach and Evaluation Summary
Steven Jens Jorgensen, Michael Lanighan, S. Bertrand, Andrew Watson, Joseph S. Altemus, R. Askew, Lyndon B. Bridgwater, Beau B. Domingue, Charlie Kendrick, Jason Lee, Mark Paterson, Jairo Sanchez, P. Beeson, Seth Gee, Stephen Hart, A. H. Quispe, Robert J. Griffin, Inho Lee, Stephen McCrory, L. Sentis, J. Pratt, Joshua S. Mehling
As part of a feasibility study, this paper shows the NASA Valkyrie humanoid robot performing an end-to-end improvised explosive device (IED) response task. To demonstrate and evaluate robot capabilities, sub-tasks highlight different locomotion, manipulation, and perception requirements: traversing uneven terrain, passing through a narrow passageway, opening a car door, retrieving a suspected IED, and securing the IED in a total containment vessel (TCV). For each sub-task, a description of the technical approach and the hidden challenges that were overcome during development are presented. The discussion of results, which explicitly includes existing limitations, is aimed at motivating continued research and development to enable practical deployment of humanoid robots for IED response. For instance, the data shows that operator pauses contribute to 50% of the total completion time, which implies that further work is needed on user interfaces for increasing task completion efficiency.
Disclaimer: Trade names and trademarks are used in this report for identification only. Their usage does not constitute an official endorsement, either expressed or implied, by the National Aeronautics and Space Administration.
{"title":"Deploying the NASA Valkyrie Humanoid for IED Response: An Initial Approach and Evaluation Summary","authors":"Steven Jens Jorgensen, Michael Lanighan, S. Bertrand, Andrew Watson, Joseph S. Altemus, R. Askew, Lyndon B. Bridgwater, Beau B. Domingue, Charlie Kendrick, Jason Lee, Mark Paterson, Jairo Sanchez, P. Beeson, Seth Gee, Stephen Hart, A. H. Quispe, Robert J. Griffin, Inho Lee, Stephen McCrory, L. Sentis, J. Pratt, Joshua S. Mehling","doi":"10.1109/Humanoids43949.2019.9034993","DOIUrl":"https://doi.org/10.1109/Humanoids43949.2019.9034993","url":null,"abstract":"As part of a feasibility study, this paper shows the NASA Valkyrie humanoid robot performing an end-to-end improvised explosive device (IED) response task. To demonstrate and evaluate robot capabilities, sub-tasks highlight different locomotion, manipulation, and perception requirements: traversing uneven terrain, passing through a narrow passageway, opening a car door, retrieving a suspected IED, and securing the IED in a total containment vessel (TCV). For each sub-task, a description of the technical approach and the hidden challenges that were overcome during development are presented. The discussion of results, which explicitly includes existing limitations, is aimed at motivating continued research and development to enable practical deployment of humanoid robots for IED response. For instance, the data shows that operator pauses contribute to 50% of the total completion time, which implies that further work is needed on user interfaces for increasing task completion efficiency.**Disclaimer: Trade names and trademarks are used in this report for identification only. Their usage does not constitute an official endorsement, either expressed or implied, by the National Aeronautics and Space Administration","PeriodicalId":404758,"journal":{"name":"2019 IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133316469","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2019-10-01 DOI: 10.1109/Humanoids43949.2019.9035032
Learning of Complex-Structured Tasks from Verbal Instruction
M. Nicolescu, Natalie Arnold, Janelle Blankenburg, David Feil-Seifer, S. Banisetty, M. Nicolescu, Andrew H. Palmer, Thor Monteverde
This paper presents a novel approach to robot task learning from language-based instructions, which focuses on increasing the complexity of task representations that can be taught through verbal instruction. The major proposed contribution is the development of a framework for directly mapping a complex verbal instruction to an executable task representation, from a single training experience. The method can handle the following types of complexities: 1) instructions that use conjunctions to convey complex execution constraints (such as alternative paths of execution, sequential or non-ordering constraints, as well as hierarchical representations) and 2) instructions that use prepositions and multiple adjectives to specify action/object parameters relevant for the task. Specific algorithms have been developed for handling conjunctions, adjectives and prepositions as well as for translating the parsed instructions into parameterized executable task representations. The paper describes validation experiments with a PR2 humanoid robot learning new tasks from verbal instruction, as well as an additional range of utterances that can be parsed into executable controllers by the proposed system.
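To make the target representation concrete, here is a hypothetical sketch of a task tree that such a mapping could produce for a conjunction-bearing instruction; the node types, grammar, and action names are invented for illustration only.

```python
# Hypothetical executable task representation: conjunctions become structural
# nodes, adjectives and prepositions become action/object parameters.
from dataclasses import dataclass, field

@dataclass
class TaskNode:
    kind: str                       # "action", "sequence", "alternative", "unordered"
    label: str = ""
    children: list = field(default_factory=list)

# "Pick up the red cup, then either place it on the tray or hand it to me."
task = TaskNode("sequence", children=[
    TaskNode("action", "pick_up(cup, color=red)"),        # adjective -> parameter
    TaskNode("alternative", children=[                    # "either ... or ..."
        TaskNode("action", "place(cup, on=tray)"),        # preposition -> parameter
        TaskNode("action", "hand_over(cup)"),
    ]),
])
```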
{"title":"Learning of Complex-Structured Tasks from Verbal Instruction","authors":"M. Nicolescu, Natalie Arnold, Janelle Blankenburg, David Feil-Seifer, S. Banisetty, M. Nicolescu, Andrew H. Palmer, Thor Monteverde","doi":"10.1109/Humanoids43949.2019.9035032","DOIUrl":"https://doi.org/10.1109/Humanoids43949.2019.9035032","url":null,"abstract":"This paper presents a novel approach to robot task learning from language-based instructions, which focuses on increasing the complexity of task representations that can be taught through verbal instruction. The major proposed contribution is the development of a framework for directly mapping a complex verbal instruction to an executable task representation, from a single training experience. The method can handle the following types of complexities: 1) instructions that use conjunctions to convey complex execution constraints (such as alternative paths of execution, sequential or non-ordering constraints, as well as hierarchical representations) and 2) instructions that use prepositions and multiple adjectives to specify action/object parameters relevant for the task. Specific algorithms have been developed for handling conjunctions, adjectives and prepositions as well as for translating the parsed instructions into parameterized executable task representations. The paper describes validation experiments with a PR2 humanoid robot learning new tasks from verbal instruction, as well as an additional range of utterances that can be parsed into executable controllers by the proposed system.","PeriodicalId":404758,"journal":{"name":"2019 IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids)","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121553253","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2019-10-01 DOI: 10.1109/Humanoids43949.2019.9034998
Manipulation Planning Using Environmental Contacts to Keep Objects Stable under External Forces
Lipeng Chen, Luis F. C. Figueredo, M. Dogar
This paper addresses the problem of sequential manipulation planning to keep an object stable under changing external forces. Particularly, we focus on using object-environment contacts. We present a planning algorithm which can generate robot configurations and motions to intelligently use object-environment, as well as object-robot, contacts, to keep an object stable under forceful operations such as drilling and cutting. Given a sequence of external forces, the planner minimizes the number of different configurations used to keep the object stable. An important computational bottleneck in this algorithm is due to the static stability analysis of a large number of configurations. We propose a containment relationship between configurations, to prune the stability checking process.
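A crude sketch of the pruning idea: if a configuration has already been verified stable for a force that dominates the new one, the expensive check can be skipped. The stability test and the containment criterion below are placeholders, not the paper's relation.

```python
# Containment-based pruning of static-stability checks: reuse a previous
# verification whenever it already covers the new external force.
import numpy as np

def static_stability_check(config, force):
    """Placeholder for the expensive static-stability analysis."""
    return config["capacity"] >= np.linalg.norm(force)

verified = {}   # config name -> largest force magnitude proven stable

def stable(config, force):
    f = np.linalg.norm(force)
    if verified.get(config["name"], 0.0) >= f:   # containment: already covered
        return True
    if static_stability_check(config, force):
        verified[config["name"]] = f             # remember the strongest verified force
        return True
    return False

cfg = {"name": "lean_on_table", "capacity": 30.0}
forces = [np.array([0, 0, -20.0]), np.array([0, 0, -10.0])]   # drill, then cut
print([stable(cfg, F) for F in forces])          # second check is pruned
```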
{"title":"Manipulation Planning Using Environmental Contacts to Keep Objects Stable under External Forces","authors":"Lipeng Chen, Luis F. C. Figueredo, M. Dogar","doi":"10.1109/Humanoids43949.2019.9034998","DOIUrl":"https://doi.org/10.1109/Humanoids43949.2019.9034998","url":null,"abstract":"This paper addresses the problem of sequential manipulation planning to keep an object stable under changing external forces. Particularly, we focus on using object-environment contacts. We present a planning algorithm which can generate robot configurations and motions to intelligently use object-environment, as well as object-robot, contacts, to keep an object stable under forceful operations such as drilling and cutting. Given a sequence of external forces, the planner minimizes the number of different configurations used to keep the object stable. An important computational bottleneck in this algorithm is due to the static stability analysis of a large number of configurations. We propose a containment relationship between configurations, to prune the stability checking process.","PeriodicalId":404758,"journal":{"name":"2019 IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids)","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125180579","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}