Whole-Body Geometric Retargeting for Humanoid Robots
Pub Date: 2019-09-22 | DOI: 10.1109/Humanoids43949.2019.9035059
Kourosh Darvish, Yeshasvi Tirupachuri, Giulio Romualdi, Lorenzo Rapetti, Diego Ferigo, Francisco Javier Andrade Chavez, D. Pucci
Humanoid robot teleoperation allows humans to integrate their cognitive capabilities with the apparatus to perform tasks that require high strength, manoeuvrability and dexterity. This paper presents a framework for teleoperation of humanoid robots using a novel approach for motion retargeting through inverse kinematics over the robot model. The proposed method enhances scalability for retargeting, i.e., it allows teleoperating different robots by different human users with minimal changes to the proposed system. Our framework enables an intuitive and natural interaction between the human operator and the humanoid robot at the configuration space level. We validate our approach by demonstrating whole-body retargeting with multiple robot models. Furthermore, we present experimental validation through teleoperation experiments using two state-of-the-art whole-body controllers for humanoid robots.
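As a rough illustration of retargeting through inverse kinematics over a robot model (not the authors' pipeline), the sketch below runs damped least-squares IK on a toy 3-link planar chain; the link lengths, gains, and the human-motion target are hypothetical placeholders.

```python
# Minimal sketch: damped least-squares IK retargeting on a toy 3-link planar
# chain. Link lengths, damping, gain and the target are illustrative
# placeholders, not the paper's robot models.
import numpy as np

LINKS = np.array([0.4, 0.4, 0.2])  # hypothetical link lengths [m]

def fk(q):
    """End-effector position of the planar chain."""
    a = np.cumsum(q)
    return np.array([np.sum(LINKS * np.cos(a)), np.sum(LINKS * np.sin(a))])

def jacobian(q):
    """2x3 position Jacobian of the planar chain."""
    a = np.cumsum(q)
    J = np.zeros((2, q.size))
    for i in range(q.size):
        J[0, i] = -np.sum(LINKS[i:] * np.sin(a[i:]))
        J[1, i] = np.sum(LINKS[i:] * np.cos(a[i:]))
    return J

def retarget_step(q, x_target, damping=1e-2, gain=0.5):
    """One damped least-squares IK step toward the (human) target."""
    J = jacobian(q)
    e = x_target - fk(q)
    dq = gain * J.T @ np.linalg.solve(J @ J.T + damping**2 * np.eye(2), e)
    return q + dq

q = np.zeros(3)
for _ in range(200):
    q = retarget_step(q, np.array([0.5, 0.5]))
print(fk(q))  # converges toward the target [0.5, 0.5]
```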
{"title":"Whole-Body Geometric Retargeting for Humanoid Robots","authors":"Kourosh Darvish, Yeshasvi Tirupachuri, Giulio Romualdi, Lorenzo Rapetti, Diego Ferigo, Francisco Javier Andrade Chavez, D. Pucci","doi":"10.1109/Humanoids43949.2019.9035059","DOIUrl":"https://doi.org/10.1109/Humanoids43949.2019.9035059","url":null,"abstract":"Humanoid robot teleoperation allows humans to integrate their cognitive capabilities with the apparatus to perform tasks that need high strength, manoeuvrability and dexterity. This paper presents a framework for teleoperation of humanoid robots using a novel approach for motion retargeting through inverse kinematics over the robot model. The proposed method enhances scalability for retargeting, i.e., it allows teleop-erating different robots by different human users with minimal changes to the proposed system. Our framework enables an intuitive and natural interaction between the human operator and the humanoid robot at the configuration space level. We validate our approach by demonstrating whole-body retargeting with multiple robot models. Furthermore, we present experimental validation through teleoperation experiments using two state-of-the-art whole-body controllers for humanoid robots.","PeriodicalId":404758,"journal":{"name":"2019 IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122553456","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Building a Library of Tactile Skills Based on FingerVision
Pub Date: 2019-09-20 | DOI: 10.1109/Humanoids43949.2019.9035000
B. Belousov, Alymbek Sadybakasov, Bastian Wibranek, Filipe Veiga, Oliver Tessmann, Jan Peters
Camera-based tactile sensors are emerging as a promising, inexpensive solution for tactile-enhanced manipulation tasks. The recently introduced FingerVision sensor was shown capable of generating reliable signals for force estimation, object pose estimation, and slip detection. In this paper, we build upon the FingerVision design, improving already existing control algorithms, and, more importantly, expanding its range of applicability to more challenging tasks by utilizing raw skin deformation data for control. In contrast to previous approaches that rely on the average deformation of the whole sensor surface, we directly employ local deviations of each spherical marker immersed in the silicone body of the sensor for feedback control and as input to learning tasks. We show that with such input, substances of varying texture and viscosity can be distinguished on the basis of tactile sensations evoked while stirring them. As another application, we learn a mapping between skin deformation and force applied to an object. To demonstrate the full range of capabilities of the proposed controllers, we deploy them in a challenging architectural assembly task that involves inserting a load-bearing element underneath a bendable plate at the point of maximum load.
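The sketch below illustrates the key feature choice (per-marker deviations rather than an averaged deformation) and a learned linear deformation-to-force map; the marker layout, data, and linear model are synthetic stand-ins, not the FingerVision calibration.

```python
# Sketch: per-marker skin-deformation features and a learned linear map from
# deformation to contact force. All data here is synthetic; the real sensor
# tracks spherical markers embedded in silicone with a camera.
import numpy as np

rng = np.random.default_rng(0)
rest = rng.uniform(0.0, 1.0, size=(30, 2))       # rest positions of 30 markers

def deformation_features(current):
    """Stack each marker's (dx, dy) deviation from rest into one vector."""
    return (current - rest).ravel()              # shape (60,)

# Synthetic training set with a hidden linear deformation->force relation.
W_true = rng.normal(size=(3, 60))
X = rng.normal(scale=0.01, size=(200, 60))       # 200 deformation samples
F = X @ W_true.T                                 # corresponding 3D forces

W_hat, *_ = np.linalg.lstsq(X, F, rcond=None)    # least-squares fit, (60, 3)

probe = rest + 0.01 * rng.normal(size=rest.shape)
print(deformation_features(probe) @ W_hat)       # estimated 3D contact force
```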
{"title":"Building a Library of Tactile Skills Based on FingerVision","authors":"B. Belousov, Alymbek Sadybakasov, Bastian Wibranek, Filipe Veiga, Oliver Tessmann, Jan Peters","doi":"10.1109/Humanoids43949.2019.9035000","DOIUrl":"https://doi.org/10.1109/Humanoids43949.2019.9035000","url":null,"abstract":"Camera-based tactile sensors are emerging as a promising inexpensive solution for tactile-enhanced manipulation tasks. A recently introduced Finger Vision sensor was shown capable of generating reliable signals for force estimation, object pose estimation, and slip detection. In this paper, we build upon the Finger Vision design, improving already existing control algorithms, and, more importantly, expanding its range of applicability to more challenging tasks by utilizing raw skin deformation data for control. In contrast to previous approaches that rely on the average deformation of the whole sensor surface, we directly employ local deviations of each spherical marker immersed in the silicone body of the sensor for feedback control and as input to learning tasks. We show that with such input, substances of varying texture and viscosity can be distinguished on the basis of tactile sensations evoked while stirring them. As another application, we learn a mapping between skin deformation and force applied to an object. To demonstrate the full range of capabilities of the proposed controllers, we deploy them in a challenging architectural assembly task that involves inserting a load-bearing element underneath a bendable plate at the point of maximum load.","PeriodicalId":404758,"journal":{"name":"2019 IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids)","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132129282","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Formal Connections between Template and Anchor Models via Approximate Simulation
Pub Date: 2019-09-20 | DOI: 10.1109/Humanoids43949.2019.9035022
Vince Kurtz, Rafael Rodrigues da Silva, Patrick M. Wensing, Hai Lin
Reduced-order template models like the Linear Inverted Pendulum (LIP) and Spring-Loaded Inverted Pendulum (SLIP) are widely used tools for controlling high-dimensional humanoid robots. However, connections between templates and whole-body models have lacked formal underpinnings, preventing formal guarantees when it comes to integrated controller design. We take a small step towards addressing this gap by considering the notion of approximate simulation. Derived from simulation relations for discrete transition systems in formal methods, approximate similarity means that the outputs of two systems can remain $\epsilon$-close. In this paper, we consider the case of controlling a balancer via planning with the LIP model. We show that the balancer approximately simulates the LIP and derive linear constraints that are sufficient conditions for maintaining ground contact. This allows for rapid planning and replanning with the template model by solving a quadratic program that enforces contact constraints in the full model. We demonstrate the efficacy of this planning and control paradigm in a simulated push recovery scenario for a planar 4-link balancer.
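A minimal sketch of the template side, assuming illustrative parameters: a discrete LIP rollout plus the $\epsilon$-closeness check that approximate simulation guarantees. The CoM height, feedback gains, and $\epsilon$ below are not from the paper.

```python
# Sketch: discrete LIP rollout and the epsilon-closeness interface condition.
import numpy as np

g, h, dt = 9.81, 0.9, 0.01
omega2 = g / h

def lip_step(x, xdot, p):
    """One Euler step of the LIP: xddot = (g/h) * (x - p), p = ZMP."""
    return x + dt * xdot, xdot + dt * omega2 * (x - p)

def zmp_feedback(x, xdot):
    """A simple stabilizing ZMP law (illustrative gains)."""
    return 2.0 * x + 0.5 * xdot

def stays_epsilon_close(y_template, y_anchor, eps=0.05):
    """Interface condition: the two outputs remain within eps at every step."""
    return bool(np.all(np.abs(np.asarray(y_template) - np.asarray(y_anchor)) <= eps))

xt, xdt, xa, xda = 0.02, 0.0, 0.03, 0.0   # template and "anchor" CoM states
ys_t, ys_a = [], []
for _ in range(400):
    xt, xdt = lip_step(xt, xdt, zmp_feedback(xt, xdt))
    xa, xda = lip_step(xa, xda, zmp_feedback(xa, xda))
    ys_t.append(xt); ys_a.append(xa)
print(stays_epsilon_close(ys_t, ys_a))    # True: both outputs stay close
```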
{"title":"Formal Connections between Template and Anchor Models via Approximate Simulation","authors":"Vince Kurtz, Rafael Rodrigues da Silva, Patrick M. Wensing, Hai Lin","doi":"10.1109/Humanoids43949.2019.9035022","DOIUrl":"https://doi.org/10.1109/Humanoids43949.2019.9035022","url":null,"abstract":"Reduced-order template models like the Linear Inverted Pendulum (LIP) and Spring-Loaded Inverted Pendulum (SLIP) are widely used tools for controlling high-dimensional humanoid robots. However, connections between templates and whole-body models have lacked formal underpinnings, preventing formal guarantees when it comes to integrated controller design. We take a small step towards addressing this gap by considering the notion of approximate simulation. Derived from simulation relations for discrete transition systems in formal methods, approximate similarity means that the outputs of two systems can remain $epsilon{-}$close. In this paper, we consider the case of controlling a balancer via planning with the LIP model. We show that the balancer approximately simulates the LIP and derive linear constraints that are sufficient conditions for maintaining ground contact. This allows for rapid planning and replanning with the template model by solving a quadratic program that enforces contact constraints in the full model. We demonstrate the efficacy of this planning and control paradigm in a simulated push recovery scenario for a planar 4-link balancer.","PeriodicalId":404758,"journal":{"name":"2019 IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128063158","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Trunk Pitch Oscillations for Joint Load Redistribution in Humans and Humanoid Robots
Pub Date: 2019-09-09 | DOI: 10.1109/Humanoids43949.2019.9035042
Özge Drama, Alexander Badri-Spröwitz
Creating natural-looking running gaits for humanoid robots is a complex task due to the underactuated degree of freedom in the trunk, which makes motion planning and control difficult. Research on trunk movements in human locomotion is limited, and no formalism is known to transfer human motion patterns onto robots. Related work mostly focuses on the lower extremities and simplifies the problem by stabilizing the trunk at a fixed angle. In contrast, humans display significant trunk motions that follow the natural dynamics of the gait. In this work, we use a spring-loaded inverted pendulum model with a trunk (TSLIP) together with a virtual point (VP) target to create trunk oscillations and investigate the impact of these movements. We analyze how the VP location and forward speed determine the direction and magnitude of the trunk oscillations. We show that positioning the VP below the center of mass (CoM) can explain the forward trunk pitching observed in human running. The VP below the CoM leads to a synergistic work between the hip and leg, reducing the leg loading. However, it comes at the cost of increased peak hip torque. Our results provide insights for leveraging the trunk motion to redistribute joint loads and potentially improve the energy efficiency in humanoid robots.
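The VP mechanism can be sketched in a few lines: the ground reaction force is aimed from the center of pressure (CoP) through a point fixed relative to the trunk, and the resulting moment about the CoM drives trunk pitch. The geometry and force magnitude below are illustrative, not the paper's model parameters.

```python
# Sketch of the virtual point (VP) mechanism in a trunk-SLIP model (2D).
import numpy as np

def vp_grf_and_moment(r_cop, r_vp, f_mag):
    """GRF directed at the VP and its pitch moment about the CoM.

    r_cop, r_vp: CoP and VP positions relative to the CoM (x forward, z up).
    """
    d = r_vp - r_cop
    f = f_mag * d / np.linalg.norm(d)
    moment = r_cop[0] * f[1] - r_cop[1] * f[0]   # 2D cross product r x f
    return f, moment

r_cop = np.array([0.05, -1.0])                   # CoP roughly a leg length below
for r_vp in (np.array([0.0, 0.2]), np.array([0.0, -0.2])):
    _, m = vp_grf_and_moment(r_cop, r_vp, f_mag=800.0)
    print(r_vp, round(m, 2))  # a VP above vs. below the CoM flips the pitch moment
```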
{"title":"Trunk Pitch Oscillations for Joint Load Redistribution in Humans and Humanoid Robots","authors":"Özge Drama, Alexander Badri-Spröwitz","doi":"10.1109/Humanoids43949.2019.9035042","DOIUrl":"https://doi.org/10.1109/Humanoids43949.2019.9035042","url":null,"abstract":"Creating natural-looking running gaits for humanoid robots is a complex task due to the underactuated degree of freedom in the trunk, which makes the motion planning and control difficult. The research on trunk movements in human locomotion is insufficient, and no formalism is known to transfer human motion patterns onto robots. Related work mostly focuses on the lower extremities, and simplifies the problem by stabilizing the trunk at a fixed angle. In contrast, humans display significant trunk motions that follow the natural dynamics of the gait. In this work, we use a spring-loaded inverted pendulum model with a trunk (TSLIP) together with a virtual point (VP) target to create trunk oscillations and investigate the impact of these movements. We analyze how the VP location and forward speed determine the direction and magnitude of the trunk oscillations. We show that positioning the VP below the center of mass (CoM) can explain the forward trunk pitching observed in human running. The VP below the CoM leads to a synergistic work between the hip and leg, reducing the leg loading. However, it comes at the cost of increased peak hip torque. Our results provide insights for leveraging the trunk motion to redistribute joint loads and potentially improve the energy efficiency in humanoid robots.","PeriodicalId":404758,"journal":{"name":"2019 IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids)","volume":"107 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127281772","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Combined Task and Motion Planning for a Dual-arm Robot to Use a Suction Cup Tool
Pub Date: 2019-08-31 | DOI: 10.1109/Humanoids43949.2019.9035036
Hao Chen, Weiwei Wan, K. Harada
This paper proposes a combined task and motion planner for a dual-arm robot to use a suction cup tool. The planner consists of three sub-planners: a suction pose sub-planner and two regrasp and motion sub-planners. The suction pose sub-planner finds all the available poses for a suction cup tool to suck on the object, using the models of the tool and the object. The regrasp and motion sub-planner builds a regrasp graph that represents all possible grasp sequences to reorient and move the suction cup tool from an initial pose to a goal pose. Two regrasp graphs are used to plan for a single suction cup and for the complex of the suction cup and an object, respectively. The output of the proposed planner is a sequence of robot motions that uses a suction cup tool to manipulate objects following human instructions. The planner is examined and analyzed through both simulation experiments and real-world executions on several real-world tasks. The results show that the planner is efficient and robust, and can generate sequential transit and transfer robot motions to complete complicated combined task and motion planning problems in a few seconds.
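A regrasp graph of this kind can be searched with an ordinary shortest-path routine. The sketch below, under assumed toy states, pairs a placement with a grasp at each node and connects states reachable by a regrasp (same placement) or a transfer (same grasp); the tiny hand-written graph is illustrative only.

```python
# Sketch of searching a regrasp graph with BFS.
from collections import deque

def bfs_path(adj, start, goal):
    """Shortest node sequence from start to goal in an undirected graph."""
    parent = {start: None}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nxt in adj.get(node, ()):
            if nxt not in parent:
                parent[nxt] = node
                queue.append(nxt)
    return None

adj = {
    ("table", "graspA"): [("table", "graspB")],                      # regrasp
    ("table", "graspB"): [("table", "graspA"), ("goal", "graspB")],  # transfer
    ("goal", "graspB"): [("table", "graspB")],
}
print(bfs_path(adj, ("table", "graspA"), ("goal", "graspB")))
```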
{"title":"Combined Task and Motion Planning for a Dual-arm Robot to Use a Suction Cup Tool","authors":"Hao Chen, Weiwei Wan, K. Harada","doi":"10.1109/Humanoids43949.2019.9035036","DOIUrl":"https://doi.org/10.1109/Humanoids43949.2019.9035036","url":null,"abstract":"This paper proposes a combined task and motion planner for a dual-arm robot to use a suction cup tool. The planner consists of three sub-planners - A suction pose sub-planner and two regrasp and motion sub-planners. The suction pose sub-planner finds all the available poses for a suction cup tool to suck on the object, using the models of the tool and the object. The regrasp and motion sub-planner builds the regrasp graph that represents all possible grasp sequences to reorient and move the suction cup tool from an initial pose to a goal pose. Two regrasp graphs are used to plan for a single suction cup and the complex of the suction cup and an object respectively. The output of the proposed planner is a sequence of robot motion that uses a suction cup tool to manipulate objects following human instructions. The planner is examined and analyzed by both simulation experiments and real-world executions using several real-world tasks. The results show that the planner is efficient, robust, and can generate sequential transit and transfer robot motion to finish complicated combined task and motion planning tasks in a few seconds.","PeriodicalId":404758,"journal":{"name":"2019 IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129545029","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Feedback Control for Autonomous Riding of Hovershoes by a Cassie Bipedal Robot
Pub Date: 2019-07-26 | DOI: 10.1109/Humanoids43949.2019.9336618
Shuxiao Chen, Jonathan D. Rogers, Bike Zhang, K. Sreenath
Motivated by the goal of achieving multi-modal locomotion, in this paper we develop a framework for a bipedal robot to dynamically ride a pair of Hovershoes over various terrains. Our control strategy enables the Cassie bipedal robot to interact with the Hovershoes to balance, regulate forward and rotational velocities, achieve fast turns, and move over flat terrain, slopes, stairs, and rough outdoor terrain. Our sensor suite, comprising tracking and depth cameras for visual SLAM, together with our Dijkstra-based global planner and timed-elastic-band-based local planning framework, enables autonomous riding on the Hovershoes while navigating an obstacle course. We present numerical and experimental validations of our work.
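One generic ingredient of riding a self-balancing platform is sketched below: regulate forward velocity by commanding a small lean, then let the platform accelerate in the lean direction. The gains and the first-order response model are assumptions for illustration, not Cassie's actual controller.

```python
# Sketch: velocity regulation by leaning on a self-balancing platform.
def lean_for_velocity(v_des, v, kp=0.10, lean_limit=0.15):
    """Map forward-velocity error to a desired pitch lean [rad], saturated."""
    lean = kp * (v_des - v)
    return max(-lean_limit, min(lean_limit, lean))

v, dt, accel_per_rad = 0.0, 0.01, 4.0            # toy platform response
for _ in range(1000):
    v += dt * accel_per_rad * lean_for_velocity(1.0, v)
print(round(v, 2))                               # approaches the 1.0 m/s target
```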
{"title":"Feedback Control for Autonomous Riding of Hovershoes by a Cassie Bipedal Robot","authors":"Shuxiao Chen, Jonathan D. Rogers, Bike Zhang, K. Sreenath","doi":"10.1109/Humanoids43949.2019.9336618","DOIUrl":"https://doi.org/10.1109/Humanoids43949.2019.9336618","url":null,"abstract":"Motivated towards achieving multi-modal locomotion, in this paper, we develop a framework for a bipedal robot to dynamically ride a pair of Hovershoes over various terrain. Our developed control strategy enables the Cassie bipedal robot to interact with the Hovershoes to balance, regulate forward and rotational velocities, achieve fast turns, and move over flat terrain, slopes, stairs, and rough outdoor terrain. Our sensor suite comprising of tracking and depth cameras for visual SLAM as well as our Dijkstra-based global planner and timed elastic band-based local planning framework enables us to achieve autonomous riding on the Hovershoes while navigating an obstacle course. We present numerical and experimental validations of our work.","PeriodicalId":404758,"journal":{"name":"2019 IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids)","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121079184","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Footstep Planning for Autonomous Walking Over Rough Terrain
Pub Date: 2019-07-19 | DOI: 10.1109/Humanoids43949.2019.9035046
Robert J. Griffin, Georg Wiedebach, Stephen McCrory, S. Bertrand, Inho Lee, J. Pratt
To increase the speed of operation and reduce operator burden, humanoid robots must be able to function autonomously, even in complex, cluttered environments. For this to be possible, they must be able to quickly and efficiently compute desired footsteps to reach a goal. In this work, we present a new A* footstep planner that utilizes a planar region representation of the environment to enable footstep planning over rough terrain. To increase the number of available footholds, we present an approach that allows the use of partial footholds during the planning process. The footstep plan solutions are then post-processed to capture better solutions that lie between the lattice discretization of the footstep graph. We then demonstrate this planner in a variety of virtual and real-world environments, including some that require partial footholds and rough terrain, using the Atlas and Valkyrie humanoid robots.
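A stripped-down version of this idea, under assumed costs and a toy foothold model, is A* over a step lattice where steps landing only partially on a region are allowed but penalized. The per-cell `foothold_fraction` stands in for the planar-region geometry; this is not the authors' planner.

```python
# Sketch: A* footstep search on an integer lattice with partial footholds.
import heapq
import itertools

def astar_footsteps(start, goal, foothold_fraction, min_fraction=0.5):
    """A* on a step lattice; partial footholds are allowed but penalized."""
    def h(n):  # admissible Manhattan heuristic (each step costs >= 1)
        return abs(n[0] - goal[0]) + abs(n[1] - goal[1])
    tie = itertools.count()                      # heap tiebreaker
    frontier = [(h(start), next(tie), 0.0, start, None)]
    parents = {}
    while frontier:
        _, _, g, node, parent = heapq.heappop(frontier)
        if node in parents:
            continue
        parents[node] = parent
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = parents[node]
            return path[::-1]
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            frac = foothold_fraction(nxt)
            if frac < min_fraction or nxt in parents:
                continue                          # reject infeasible footholds
            cost = 1.0 + 2.0 * (1.0 - frac)       # extra cost for partial support
            heapq.heappush(frontier,
                           (g + cost + h(nxt), next(tie), g + cost, nxt, node))
    return None

# Toy terrain: full footholds everywhere except a strip with 60% support.
frac = lambda n: 0.6 if n[0] == 2 else 1.0
print(astar_footsteps((0, 0), (4, 0), frac))
```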
{"title":"Footstep Planning for Autonomous Walking Over Rough Terrain","authors":"Robert J. Griffin, Georg Wiedebach, Stephen McCrory, S. Bertrand, Inho Lee, J. Pratt","doi":"10.1109/Humanoids43949.2019.9035046","DOIUrl":"https://doi.org/10.1109/Humanoids43949.2019.9035046","url":null,"abstract":"To increase the speed of operation and reduce operator burden, humanoid robots must be able to function autonomously, even in complex, cluttered environments. For this to be possible, they must be able to quickly and efficiently compute desired footsteps to reach a goal. In this work, we present a new A * footstep planner that utilizes a planar region representation of the environment enable footstep planning over rough terrain. To increase the number of available footholds, we present an approach to allow the use of partial footholds during the planning process. The footstep plan solutions are then post-processed to capture better solutions that lie between the lattice discretization of the footstep graph. We then demonstrate this planner over a variety of virtual and real world environments, including some that require partial footholds and rough terrain using the Atlas and Valkyrie humanoid robots.","PeriodicalId":404758,"journal":{"name":"2019 IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115531267","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Absolute humanoid localization and mapping based on IMU Lie group and fiducial markers
Pub Date: 2019-07-15 | DOI: 10.1109/Humanoids43949.2019.9035005
Médéric Fourmy, Dinesh Atchuthan, N. Mansard, J. Solà, T. Flayols
Current locomotion algorithms in structured (indoor) 3D environments require accurate localization. The several diverse sensors typically embedded on legged robots (IMU, joint encoders, vision and/or LIDARs) should make this possible if properly fused. Yet this is a difficult task due to the heterogeneity of these sensors and the real-time requirements of the control. While previous works used staggered approaches (odometry at high frequency, sparsely corrected from vision and LIDAR localization), recent progress in optimal estimation, in particular in visual-inertial localization, is paving the way to a holistic fusion. This paper is a contribution in this direction. We propose to quantify how accurately a visual-inertial navigation system can localize a humanoid robot in a 3D indoor environment tagged with fiducial markers. We introduce a theoretical contribution strengthening the formulation of Forster's IMU pre-integration, a practical contribution to avoid possible ambiguities raised by pose estimation of fiducial markers, and an experimental contribution on a humanoid dataset with ground truth. Our system is able to localize the robot with less than 2 cm error once the environment is properly mapped. This would naturally extend to additional measurements corresponding to leg odometry (kinematic factors) thanks to the genericity of the proposed pre-integration algebra.
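For orientation, the textbook (Forster-style) pre-integration recursion is sketched below: raw gyro and accelerometer samples between two keyframes are folded into relative rotation/velocity/position deltas that an estimator can treat as a single factor. Biases and gravity are assumed handled elsewhere; this is not the paper's Lie-group refinement.

```python
# Sketch: standard IMU pre-integration between two keyframes.
import numpy as np

def skew(w):
    return np.array([[0, -w[2], w[1]], [w[2], 0, -w[0]], [-w[1], w[0], 0]])

def exp_so3(w):
    """SO(3) exponential map via the Rodrigues formula."""
    theta = np.linalg.norm(w)
    if theta < 1e-9:
        return np.eye(3) + skew(w)
    K = skew(w / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * K @ K

def preintegrate(gyros, accels, dt):
    """Accumulate (dR, dv, dp) over a batch of body-frame IMU samples."""
    dR, dv, dp = np.eye(3), np.zeros(3), np.zeros(3)
    for w, a in zip(gyros, accels):
        dp = dp + dv * dt + 0.5 * dR @ a * dt**2   # position delta first
        dv = dv + dR @ a * dt                      # then velocity delta
        dR = dR @ exp_so3(w * dt)                  # then rotation delta
    return dR, dv, dp

# 100 samples of a constant yaw rate and a constant forward acceleration.
n, dt = 100, 0.005
dR, dv, dp = preintegrate([np.array([0.0, 0.0, 0.5])] * n,
                          [np.array([1.0, 0.0, 0.0])] * n, dt)
print(dR.round(3), dv.round(3), dp.round(3))
```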
{"title":"Absolute humanoid localization and mapping based on IMU Lie group and fiducial markers","authors":"Médéric Fourmy, Dinesh Atchuthan, N. Mansard, J. Solà, T. Flayols","doi":"10.1109/Humanoids43949.2019.9035005","DOIUrl":"https://doi.org/10.1109/Humanoids43949.2019.9035005","url":null,"abstract":"Current locomotion algorithms in structured (in-door) 3D environments require an accurate localization. The several and diverse sensors typically embedded on legged robots (IMU, coders, vision and/or LIDARS) should make it possible if properly fused. Yet this is a difficult task due to the heterogeneity of these sensors and the real-time requirement of the control. While previous works were using staggered approaches (odometry at high frequency, sparsely corrected from vision and LIDAR localization), the recent progress in optimal estimation, in particular in visual-inertial localization, is paving the way to a holistic fusion. This paper is a contribution in this direction. We propose to quantify how a visual-inertial navigation system can accurately localize a humanoid robot in a 3D indoor environment tagged with fiducial markers. We introduce a theoretical contribution strengthening the formulation of Forster's IMU pre-integration, a practical contribution to avoid possible ambiguity raised by pose estimation of fiducial markers, and an experimental contribution on a humanoid dataset with ground truth. Our system is able to localize the robot with less than 2 cm errors once the environment is properly mapped. This would naturally extend to additional measurements corresponding to leg odometry (kinematic factors) thanks to the genericity of the proposed pre-integration algebra.","PeriodicalId":404758,"journal":{"name":"2019 IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115035191","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Gait Generation using Intrinsically Stable MPC in the Presence of Persistent Disturbances
Pub Date: 2019-07-13 | DOI: 10.1109/Humanoids43949.2019.9035068
Filippo M. Smaldone, Nicola Scianca, Valerio Modugno, L. Lanari, G. Oriolo
From a control point of view, humanoid gait generation can be seen as a problem of tracking a suitable ZMP trajectory while guaranteeing internal stability. In the presence of disturbances, both these aspects are at risk, and a fall may ultimately occur. In this paper, we extend our previously proposed Intrinsically Stable MPC (IS-MPC) method, which guarantees stable tracking for the unperturbed case, to the case of persistent disturbances. This is achieved by designing a disturbance observer whose estimate is used to set up a modified stability constraint for the QP problem. The method is validated by MATLAB tests as well as dynamic simulations for a NAO humanoid in DART.
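The disturbance-observer ingredient can be sketched in isolation: compare measured CoM motion against the LIP prediction and integrate the residual into a disturbance estimate, which can then shift the MPC stability constraint. The LIP parameters, gains, and the stabilizing ZMP law below are illustrative assumptions, not the paper's design.

```python
# Sketch: a simple disturbance observer on the LIP model.
import numpy as np

g, h, dt = 9.81, 0.8, 0.01
eta2 = g / h

def lip_predict(x, xdot, zmp, d):
    """One Euler step of the LIP with an additive CoM disturbance d."""
    return x + dt * xdot, xdot + dt * (eta2 * (x - zmp) + d)

def observer_update(d_hat, xdot_meas, xdot_pred, gain=50.0):
    """Push the velocity prediction error into the disturbance estimate."""
    return d_hat + gain * dt * (xdot_meas - xdot_pred)

x, xd, d_hat, d_true = 0.02, 0.0, 0.0, 0.3
for _ in range(2000):
    zmp = 2.0 * x + 0.5 * xd                     # stabilizing ZMP feedback
    _, xd_pred = lip_predict(x, xd, zmp, d_hat)  # model prediction
    x, xd = lip_predict(x, xd, zmp, d_true)      # "plant" with true disturbance
    d_hat = observer_update(d_hat, xd, xd_pred)
print(round(d_hat, 2))                           # converges toward 0.3
```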
{"title":"Gait Generation using Intrinsically Stable MPC in the Presence of Persistent Disturbances","authors":"Filippo M. Smaldone, Nicola Scianca, Valerio Modugno, L. Lanari, G. Oriolo","doi":"10.1109/Humanoids43949.2019.9035068","DOIUrl":"https://doi.org/10.1109/Humanoids43949.2019.9035068","url":null,"abstract":"From a control point of view, humanoid gait generation can be seen as a problem of tracking a suitable ZMP trajectory while guaranteeing internal stability. In the presence of disturbances, both these aspects are at risk, and a fall may ultimately occur. In this paper, we extend our previously proposed Intrinsically Stable MPC (IS-MPC) method, which guarantees stable tracking for the unperturbed case, to the case of persistent disturbances. This is achieved by designing a disturbance observer whose estimate is used to set up a modified stability constraint for the QP problem. The method is validated by MATLAB tests as well as dynamic simulations for a NAO humanoid in DART.","PeriodicalId":404758,"journal":{"name":"2019 IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids)","volume":"75 11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128447702","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robust Humanoid Locomotion Using Trajectory Optimization and Sample-Efficient Learning
Pub Date: 2019-07-10 | DOI: 10.1109/Humanoids43949.2019.9035003
Mohammad Hasan Yeganegi, M. Khadiv, S. Moosavian, Jia-Jie Zhu, A. Del Prete, L. Righetti
Trajectory optimization (TO) is one of the most powerful tools for generating feasible motions for humanoid robots. However, including uncertainties and stochasticity in the TO problem to generate robust motions can easily lead to intractable problems. Furthermore, since the models used in TO always have some level of abstraction, it can be hard to find a realistic set of uncertainties in the model space. In this paper we leverage a sample-efficient learning technique (Bayesian optimization) to robustify TO for humanoid locomotion. The main idea is to use data from full-body simulations to make the TO stage robust by tuning the cost weights. To this end, we split the TO problem into two stages. The first stage solves a convex optimization problem for generating center of mass (CoM) trajectories based on simplified linear dynamics. The second stage employs iterative Linear-Quadratic Gaussian (iLQG) as a whole-body controller to generate full-body control inputs. We then use Bayesian optimization to find the cost weights for the first stage that yield robust performance in simulation/experiment, in the presence of different disturbances/uncertainties. The results show that the proposed approach is able to generate robust motions for different sets of disturbances and uncertainties.
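The outer loop of such an approach can be sketched generically: Bayesian optimization over the TO cost weights, with a Gaussian-process surrogate and a lower-confidence-bound acquisition. Here `rollout()` is a hypothetical stand-in for "run TO plus the iLQG whole-body controller in simulation and return a tracking cost"; the 2D weight space and all constants are assumptions.

```python
# Sketch: Bayesian optimization of cost weights with a GP surrogate.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def rollout(weights):
    """Placeholder for the full-body simulation score (hypothetical)."""
    return np.sum((weights - 0.3) ** 2)          # made-up cost, minimum at 0.3

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(5, 2))           # initial weight samples
y = np.array([rollout(w) for w in X])

for _ in range(20):
    gp = GaussianProcessRegressor().fit(X, y)    # GP surrogate of the cost
    cand = rng.uniform(0.0, 1.0, size=(256, 2))  # candidate weight vectors
    mu, sigma = gp.predict(cand, return_std=True)
    nxt = cand[np.argmin(mu - 1.0 * sigma)]      # LCB acquisition (minimize)
    X = np.vstack([X, nxt])
    y = np.append(y, rollout(nxt))

print("best weights:", X[np.argmin(y)])          # near [0.3, 0.3]
```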
{"title":"Robust Humanoid Locomotion Using Trajectory Optimization and Sample-Efficient Learning*","authors":"Mohammad Hasan Yeganegi, M. Khadiv, S. Moosavian, Jia-Jie Zhu, A. Prete, L. Righetti","doi":"10.1109/Humanoids43949.2019.9035003","DOIUrl":"https://doi.org/10.1109/Humanoids43949.2019.9035003","url":null,"abstract":"Trajectory optimization (TO) is one of the most powerful tools for generating feasible motions for humanoid robots. However, including uncertainties and stochasticity in the TO problem to generate robust motions can easily lead to intractable problems. Furthermore, since the models used in TO have always some level of abstraction, it can be hard to find a realistic set of uncertainties in the model space. In this paper we leverage a sample-efficient learning technique (Bayesian optimization) to robustify TO for humanoid locomotion. The main idea is to use data from full-body simulations to make the TO stage robust by tuning the cost weights. To this end, we split the TO problem into two phases. The first phase solves a convex optimization problem for generating center of mass (CoM) trajectories based on simplified linear dynamics. The second stage employs iterative Linear-Quadratic Gaussian (iLQG) as a whole-body controller to generate full body control inputs. Then we use Bayesian optimization to find the cost weights to use in the first stage that yields robust performance in the simulation/experiment, in the presence of different disturbance/uncertainties. The results show that the proposed approach is able to generate robust motions for different sets of disturbances and uncertainties.","PeriodicalId":404758,"journal":{"name":"2019 IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids)","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130291722","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}