Push Recovery by Angular Momentum Control during 3D Bipedal Walking based on Virtual-mass-ellipsoid Inverted Pendulum Model
Kaixuan Guan, Ko Yamamoto, Yoshihiko Nakamura
2019 IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids)
Pub Date: 2019-10-01, DOI: 10.1109/Humanoids43949.2019.9035021
During walking, humanoid robots may encounter many exogenous disturbances in unknown environments, so the ability to resist such disturbances is very important. Push recovery is therefore an interesting issue attracting many humanoid robot researchers. Previous works based on the Linear Inverted Pendulum (LIP) model inherit the restrictive assumptions of that model. In this paper, we extend the Virtual-mass-ellipsoid Inverted Pendulum (VIP) model proposed in our previous work to recover from orientational pushes by angular momentum control. The proposed methods have three main characteristics: they can recover from pushes during walking (not just during standing); they are suitable for 3D uneven terrains; and, most significantly, they completely remove the constant center of mass (CoM) height (or constant height on a slope) limitation and the constant centroidal angular momentum (CAM) limitation. Simulations using HRP4 have validated the effectiveness of the proposed methods.
Optimal Trajectory for Active Safe Falls in Humanoid Robots
L. Rossini, Bernd Henze, F. Braghin, M. Roa
Pub Date: 2019-10-01, DOI: 10.1109/Humanoids43949.2019.9035074
Humanoid robots are being introduced in multiple environments, including houses, health care facilities, and factories. Bipedal robots are, however, inherently unstable, and they might fall for multiple reasons, including internal failures or external perturbations. In these situations, the robot should guarantee as much as possible the integrity of humans in the workspace, of the environment, and of the robot itself. When some control authority is left on the robot, it can be actively commanded to follow a predefined trajectory that minimizes the consequences of the impact with the ground. This paper presents the computation of an optimal falling trajectory using a telescopic inverted pendulum model, which translates into squatting and stretching motions in the robot to dissipate as much energy as possible. The results show that the prescribed trajectory is effective for maximizing the dissipated energy before the actual impact.
Quintic Spline Collocation for Real-Time Biped Walking-Pattern Generation with Variable Torso Height
Philipp Seiwald, Felix Sygulla, Nora-Sophie Staufenberg, D. Rixen
Pub Date: 2019-10-01, DOI: 10.1109/Humanoids43949.2019.9035076
This paper presents our newest findings in planning a dynamically and kinematically feasible center of mass motion for bipedal walking robots. We use a simplified robot model to incorporate multi-body dynamics and kinematic limits, while still being able to meet hard real-time requirements. The vertical center of mass motion is obtained through interpolation of a quintic spline whose control points are projected onto the kinematically feasible region. Subsequently, the horizontal motion is computed from multi-body dynamics, which we approximate by solving an overdetermined boundary value problem via spline collocation based on quintic polynomials. The proposed algorithm is an improvement of our previous method, which used a parametric torso height optimization for the vertical component and cubic spline collocation for the horizontal components. The novel center of mass motion improves stability, especially for stepping up and down platforms. Moreover, the new method leads to a less complex overall algorithm, since it removes the necessity of manually tuned parameters and strongly simplifies the incorporation of boundary values. Lastly, the new approach is more efficient, which leads to a significantly reduced total runtime. The proposed method is validated through successfully conducted simulations and experiments on our humanoid robot platform, LoLA.
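The quintic boundary-value interpolation at the core of the entry above can be illustrated with a small sketch. This is a generic quintic (minimum-jerk style) polynomial fit matching position, velocity, and acceleration at both endpoints, not the authors' collocation solver; the function name and the example heights are ours.

```python
import numpy as np

def quintic_coeffs(z0, dz0, ddz0, zT, dzT, ddzT, T):
    """Coefficients a_0..a_5 of z(t) = sum_k a_k t^k that match
    position, velocity, and acceleration at t = 0 and t = T."""
    A = np.array([
        [1, 0, 0,    0,       0,        0],         # z(0)
        [0, 1, 0,    0,       0,        0],         # z'(0)
        [0, 0, 2,    0,       0,        0],         # z''(0)
        [1, T, T**2, T**3,    T**4,     T**5],      # z(T)
        [0, 1, 2*T,  3*T**2,  4*T**3,   5*T**4],    # z'(T)
        [0, 0, 2,    6*T,     12*T**2,  20*T**3],   # z''(T)
    ], dtype=float)
    b = np.array([z0, dz0, ddz0, zT, dzT, ddzT], dtype=float)
    return np.linalg.solve(A, b)

# Illustrative boundary values: raise the CoM from 0.80 m to 0.85 m
# in 0.8 s, starting and ending at rest.
a = quintic_coeffs(0.80, 0.0, 0.0, 0.85, 0.0, 0.0, 0.8)
z_mid = sum(a[k] * 0.4**k for k in range(6))  # height at mid-trajectory
```

A full collocation scheme would stack many such polynomial segments and enforce the dynamics at interior collocation points; this sketch only shows the boundary-condition algebra for a single segment.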
Collision Preventing Phase-Progress Control for Velocity Adaptation in Human-Robot Collaboration
Dinmukhamed Zardykhan, Petr Svarný, M. Hoffmann, Erfan Shahriari, S. Haddadin
Pub Date: 2019-10-01, DOI: 10.1109/Humanoids43949.2019.9035065
As robots leave dedicated areas on the factory floor and start to share workspaces with humans, the safety of such collaboration becomes a major challenge. In this work, we propose new approaches to robot velocity modulation: while the robot is on a path prescribed by the task, it predicts possible collisions with the human and gradually slows down, proportionally to the danger of collision. Two principal approaches are developed, Impulse Orb and Prognosis Window; both dynamically determine possible robot-induced collisions and apply a novel velocity-modulating scheme in which the phase progress of the robot trajectory is modulated while the desired robot path remains intact. The methods guarantee that the robot will halt before contacting the human, but they are less conservative and more flexible than solutions that rely only on reduced speed and complete stops, thereby increasing the effectiveness of human-robot collaboration. This approach is especially useful in constrained setups where the robot path is prescribed. Speed modulation is smooth and does not lead to abrupt motions, which also makes the robot's behavior more understandable to the human counterpart. The two principal methods under different parameter settings are experimentally validated in a human-robot interaction scenario with the Franka Emika Panda robot, an external RGB-D camera, and human keypoint detection using OpenPose.
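The central idea of the entry above, slowing the *timing* of the trajectory while leaving its geometry untouched, can be sketched in a few lines. The thresholds and function names below are illustrative assumptions, not the paper's calibrated Impulse Orb or Prognosis Window formulations.

```python
def phase_rate(distance_m, d_stop=0.3, d_full=1.2):
    """Scale the nominal phase-progress rate into [0, 1] by proximity:
    full speed beyond d_full, a complete halt at or below d_stop,
    and a linear ramp in between. d_stop and d_full are illustrative
    thresholds, not values from the paper."""
    if distance_m <= d_stop:
        return 0.0
    if distance_m >= d_full:
        return 1.0
    return (distance_m - d_stop) / (d_full - d_stop)

def advance_phase(phase, dt, distance_m):
    """The robot keeps tracking the same geometric path s(phase);
    only how fast the phase advances is modulated."""
    return phase + dt * phase_rate(distance_m)
```

Because only the phase rate changes, the robot never deviates from the prescribed path, which matches the constrained-setup use case the abstract emphasizes.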
Unified Foothold Selection and Motion Planning for Legged Systems in Real-Time
Steven Crews, Sapan Agrawal, M. Travers
Pub Date: 2019-10-01, DOI: 10.1109/Humanoids43949.2019.9035075
This work presents a novel architecture that unifies footstep planning, motion planning, and online feedback control for legged robots moving through complex environments. Our approach contrasts with related prior works that treat planning and control as separate components in a hierarchical framework (first plan, then control). Though prior works have demonstrated success, existing state-of-the-art planning and control architectures for legged robots quickly become brittle in highly uncertain environments due to an inherent inability to dynamically and decisively react to unplanned events. To address this, this work presents a novel framework that uses modeling and analysis tools from the hybrid systems and nonlinear control communities to reformulate footstep planning, dynamic trajectory generation, and the derivation of closed-loop controllers as a single trajectory optimization problem. By combining these previously disparate steps, we empirically show that we can remove much of the complexity that underlies the hierarchical decision-making of conventional approaches, making it possible to dynamically and safely react to large external disturbances in sub-real-time. We present results that highlight the reactive and robust nature of the unified framework developed.
Generative Adversarial Imitation Learning with Deep P-Network for Robotic Cloth Manipulation
Yoshihisa Tsurumine, Yunduan Cui, Kimitoshi Yamazaki, Takamitsu Matsubara
Pub Date: 2019-10-01, DOI: 10.1109/Humanoids43949.2019.9034991
Although deep Reinforcement Learning (RL) has been successfully applied to a variety of tasks, manually designing appropriate reward functions for such complex tasks as robotic cloth manipulation still remains challenging and costly. In this paper, we explore an approach of Generative Adversarial Imitation Learning (GAIL) for robotic cloth manipulation tasks, which allows an agent to learn near-optimal behaviors from expert demonstration and self-exploration without explicit reward function design. Based on the recent success of value-function based RL with a discrete action set for robotic cloth manipulation tasks [1], we develop a novel value-function based imitation learning framework, P-GAIL. P-GAIL employs a modified value-function based deep RL method, the Entropy-maximizing Deep P-Network, that can consider both smoothness and causal entropy in the policy update. After investigating its effectiveness through a toy problem in simulation, P-GAIL is applied to a dual-arm humanoid robot tasked with flipping a handkerchief, and it successfully learns a policy close to a human demonstration with limited exploration and demonstration. Experimental results suggest both fast and stable imitation learning and sample efficiency of P-GAIL in robotic cloth manipulation.
Magnetic 3-axis Soft and Sensitive Fingertip Sensors Integration for the iCub Humanoid Robot
A. C. Holgado, Nicola A. Piga, Tito Pradhono Tomo, G. Vezzani, A. Schmitz, L. Natale, S. Sugano
Pub Date: 2019-10-01, DOI: 10.1109/Humanoids43949.2019.9035062
The humanoid robot iCub is currently equipped with an array of capacitive sensors that provide pressure information throughout the body of the robot. Even though for some applications this type of data is sufficient, it is not always the case for the fingertips of the robot. In particular, the current sensors do not provide enough information for performing agile manipulation, where both the intensity and the direction of the force exerted on the fingertips are relevant for the proper execution of the task. In this paper, we present a small single 3-axis magnetic sensor module and show its effectiveness when integrated into the fingertips of the iCub. The sensor module is derived from uSkin, presented in previous works from our laboratory. Replaceable fingertips were designed, built, and integrated via software into the low-level communication network of the robot, providing fast 3D information about the contact between the fingertips and objects. Additionally, we present two experiments demonstrating tasks that would not be possible to perform with the current fingertip sensors.
Deep Correspondence Learning for Effective Robotic Teleoperation using Virtual Reality
S. Gaurav, Zainab Al-Qurashi, Amey Barapatre, George P Maratos, T. Sarma, Brian D. Ziebart
Pub Date: 2019-10-01, DOI: 10.1109/Humanoids43949.2019.9035031
By projecting into a 3-D workspace, robotic teleoperation using virtual reality allows for a more intuitive method of control for the operator than using a 2-D view from the robot's visual sensors. This paper investigates a setup that places the teleoperator in a virtual representation of the robot's environment and develops a deep learning based architecture modeling the correspondence between the operator's movements in the virtual space and joint angles for a humanoid robot, using data collected from a series of demonstrations. We evaluate the correspondence model's performance in a pick-and-place teleoperation experiment.
Leveraging Multimodal Haptic Sensory Data for Robust Cutting
Kevin Zhang, Mohit Sharma, M. Veloso, Oliver Kroemer
Pub Date: 2019-09-27, DOI: 10.1109/Humanoids43949.2019.9035073
Cutting is a common form of manipulation when working with divisible objects such as food, rope, or clay. Cooking in particular relies heavily on cutting to divide food items into desired shapes. However, cutting food is a challenging task due to the wide range of material properties exhibited by food items. Due to this variability, the same cutting motions cannot be used for all food items. Sensations from contact events, e.g., when placing the knife on the food item, will also vary depending on the material properties, and the robot will need to adapt accordingly. In this paper, we propose using vibrations and force-torque feedback from the interactions to adapt the slicing motions and monitor for contact events. The robot learns neural networks for performing each of these tasks and generalizing across different material properties. By adapting and monitoring the skill executions, the robot is able to reliably cut through more than 20 different types of food items and even detect whether certain food items are fresh or old.
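The contact-event monitoring described in the entry above relies on learned networks over vibration and force-torque signals; as a much simpler stand-in, a contact event can be flagged from a running mean of the measured force. The threshold, window size, and function name below are our illustrative assumptions, not the paper's learned monitor.

```python
import numpy as np

def detect_contact(fz_history, threshold=2.0, window=5):
    """Flag a contact event when the running mean of the vertical
    force (in N) over the last `window` samples exceeds `threshold`.
    A toy threshold detector, not the paper's learned multimodal
    (vibration + force-torque) event monitor."""
    if len(fz_history) < window:
        return False
    return bool(np.mean(fz_history[-window:]) > threshold)
```

Averaging over a short window rejects single-sample spikes; the learned approach in the paper additionally generalizes the decision across material properties, which a fixed threshold cannot.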
Online DCM Trajectory Generation for Push Recovery of Torque-Controlled Humanoid Robots
M. Shafiee, Giulio Romualdi, Stefano Dafarra, Francisco Javier Andrade Chavez, D. Pucci
Pub Date: 2019-09-23, DOI: 10.1109/Humanoids43949.2019.9034996
We present a computationally efficient method for online planning of bipedal walking trajectories with push recovery. In particular, the proposed methodology fits control architectures where the Divergent Component of Motion (DCM) is planned beforehand, and adds a step adapter to adjust the planned trajectories and achieve push recovery. Assuming that the robot is in a single support state, the step adapter generates new positions and timings for the next step. The step adapter is active in single support phases only, but the proposed torque-control architecture considers double support phases too. The key idea in the design of the step adapter is to impose both the initial and final DCM values of a step using an exponential interpolation of the time-varying ZMP trajectory. This allows us to cast the push recovery problem as a Quadratic Programming (QP) problem and to solve it online with state-of-the-art optimisers. The overall approach is validated with simulations of the torque-controlled 33 kg humanoid robot iCub. Results show that the proposed strategy prevents the humanoid robot from falling while walking at 0.28 m/s and being pushed with external forces of up to 150 N for 0.05 s.
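The algebra behind imposing both initial and final DCM values, as described in the entry above, follows from the closed-form solution of the unstable first-order DCM dynamics. The sketch below uses the standard one-dimensional DCM model with a piecewise-constant ZMP, a simplification of the paper's time-varying ZMP interpolation; the function names are ours.

```python
import math

def dcm_trajectory(xi0, zmp, omega, t):
    """Closed-form DCM evolution xi(t) = zmp + (xi0 - zmp) * exp(omega * t)
    under the unstable dynamics xi_dot = omega * (xi - zmp), assuming a
    constant ZMP over the interval (omega = sqrt(g / z_com))."""
    return zmp + (xi0 - zmp) * math.exp(omega * t)

def zmp_for_final_dcm(xi0, xi_T, omega, T):
    """Invert the closed form: the constant ZMP that drives the DCM from
    xi0 to xi_T in time T. This is the core algebra behind imposing both
    initial and final DCM values for a step."""
    eT = math.exp(omega * T)
    return (xi_T - xi0 * eT) / (1.0 - eT)
```

In the paper, this relation is embedded in a QP over step positions and timings rather than solved pointwise, but the exponential structure of the interpolation is the same.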