Pub Date: 2017-11-01 | DOI: 10.1109/HUMANOIDS.2017.8246951
Robson Kenji Sato, T. Sugihara
A mismatch between a robot and its simplified COM-ZMP model arises because the latter does not account for kinematic limits. Conservative approaches restrain the robot's mobility in order to stay away from those limits. The present work combines a virtual leader-follower concept with a robust prioritized inverse-kinematics solver on top of a COM-ZMP-based walking controller to accomplish walking at the boundary of the workspace. Simulations with simplified dynamics showed the applicability of the method. When whole-body dynamics were included, forward motion with stretched knees was achieved at the cost of longitudinal velocity. It is suggested that the proposed method is applicable to other controllers as well.
Title: Walking control for feasibility at limit of kinematics based on virtual leader-follower. Venue: 2017 IEEE-RAS 17th International Conference on Humanoid Robotics (Humanoids).
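The COM-ZMP model this abstract refers to is the standard linear-inverted-pendulum relation x_ddot = (g / z_c)(x - p), with constant CoM height z_c and ZMP position p. A minimal sketch of its dynamics follows; the constants and the proportional ZMP rule are illustrative assumptions, not the paper's controller:

```python
G = 9.81      # gravity [m/s^2]
Z_C = 0.8     # assumed constant CoM height [m]
OMEGA2 = G / Z_C

def com_zmp_step(x, v, p, dt=0.005):
    """One semi-implicit Euler step of x_ddot = (g / z_c) * (x - p)."""
    v += OMEGA2 * (x - p) * dt
    x += v * dt
    return x, v

# Placing the ZMP beyond the CoM (gain > 1) accelerates the CoM back
# toward the support point; with p = 0 the CoM would diverge exponentially.
x, v = 0.05, 0.0                 # CoM starts 5 cm from the support point
for _ in range(200):             # 1 s of simulated time
    p = 1.5 * x                  # illustrative proportional ZMP placement
    x, v = com_zmp_step(x, v, p)
```

The kinematic-limit problem the paper addresses arises exactly here: the model moves a point mass freely, while the real legs may be unable to realize the commanded CoM or ZMP at the workspace boundary.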
Pub Date: 2017-11-01 | DOI: 10.1109/HUMANOIDS.2017.8246963
Mengze Li, Zhaofan Yuan, T. Aoyama, Y. Hasegawa
When a paraplegic patient walks with the aid of an exoskeleton, the patient can only follow the preset walking trajectory and cannot receive any lower-limb sensory feedback. This paper first proposes an index-finger interface that allows a paraplegic patient to voluntarily control the walking trajectory with an index finger. The proposed interface consists of two rotatable links and one ring, and it is installed in front of a crutch handle. The user controls the foot position via the ring while keeping balance with the crutch. On this basis, the paper also proposes an electrical-stimulation feedback pattern that conveys the foot position to the user to assist walking control. The electrical stimulation device contains 20 stimulation points that correspond to spatial positions, and the stimulus presents the foot position by switching the stimulation frequency and the stimulation position. For safety reasons, we conducted a preliminary study of the interface and feedback model with a healthy subject and a walking robot. We validated the effectiveness of the index-finger interface and the electrical stimulation pattern through three experiments: 1) pseudo-trajectory recognition under the electrical-stimulation feedback condition, 2) robot walking control under the visual feedback condition, and 3) robot walking control under the electrical-stimulation feedback condition.
Title: Electrical stimulation feedback for gait control of walking simulator.
Pub Date: 2017-11-01 | DOI: 10.1109/HUMANOIDS.2017.8246964
J. Felip, D. Gonzalez-Aguirre, Omesh Tickoo
The ability to predict the future location of objects is key for robots operating in unstructured and uncertain scenarios. It is even more important for general-purpose humanoid robots, which are meant to operate in and adapt to multiple scenarios: they need to determine the possible outcomes of actions, reason about their effects, and plan subsequent movements accordingly so as to act preemptively. The prediction ability of current robotic systems is far from that of humans. Neuroscience studies point out that humans possess a predictive ability, called intuitive physics, to anticipate the behavior of dynamic environments, enabling them to predict and take preemptive actions when necessary, for example to catch a flying ball or grab an object that is about to fall off a table. In this paper, we present a system that learns to predict based on previous observations. First, objects' physical parameters are learned through observation using parameter-search techniques. Second, the learned dynamic model of the objects is used to generate probabilistic predictions through physics simulation. The proposed parameter-search update rules are compared with state-of-the-art approaches to physical parameter learning. Finally, the predictive capability is evaluated through simulated and real experiments.
Title: Towards intuitive rigid-body physics through parameter search.
Pub Date: 2017-11-01 | DOI: 10.1109/HUMANOIDS.2017.8239542
Mia Kokic, J. A. Stork, Joshua A. Haustein, D. Kragic
In this paper we utilize the notion of affordances to model the relations between tasks, objects, and grasps in order to address the problem of task-specific robotic grasping. We use convolutional neural networks to encode and detect object affordances, class, and orientation, which we then utilize to formulate grasp constraints. Our approach applies to previously unseen objects from a fixed set of classes and facilitates reasoning about which tasks an object affords and how to grasp it for each such task. We evaluate affordance detection on full-view and partial-view synthetic data and compute task-specific grasps for objects that belong to ten different classes and afford five different tasks. We demonstrate the feasibility of our approach by employing an optimization-based grasp planner to compute the task-specific grasps.
Title: Affordance detection for task-specific grasping using deep learning.
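One simple way a detected affordance can be turned into a grasp constraint is to exclude the object part that affords the task from the graspable region. The class names and part labels below are hypothetical, chosen only to illustrate the idea, not the paper's ten classes or five tasks:

```python
# class -> {task: the part that affords the task and must therefore stay free}
AFFORDANCES = {
    "knife": {"handover": "blade", "cut": "blade"},
    "mug":   {"pour": "opening"},
}

def grasp_constraint(obj_class, task, parts):
    """Parts of the object a planner is allowed to grasp for this task."""
    blocked = AFFORDANCES.get(obj_class, {}).get(task)
    return [p for p in parts if p != blocked]
```

In the paper this table is not hand-written: the CNN predicts affordances, class, and orientation, and those predictions parameterize the constraints handed to the optimization-based grasp planner.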
Pub Date: 2017-11-01 | DOI: 10.1109/HUMANOIDS.2017.8246923
M. Devanne, S. Nguyen
Analyzing and understanding human motion is a major research problem that has been widely investigated over recent decades in various application domains. In this work, we address human motion analysis in the context of kinaesthetic rehabilitation using a robot coach system, which should be able both to learn how to perform a rehabilitation exercise and to assess patients' movements. For that purpose, human motion analysis is crucial. We develop a method for learning a probabilistic representation of ideal movements from expert demonstrations: a Gaussian Mixture Model is trained on position and orientation features captured with a Microsoft Kinect v2. For assessing patients' movements, we propose a real-time multi-level analysis that both temporally and spatially identifies and explains body-part errors. This allows the robot to provide coaching advice that helps the patient improve their movements. The evaluation on three rehabilitation exercises shows the potential of the proposed approach for learning and assessing kinaesthetic movements.
Title: Multi-level motion analysis for physical exercises assessment in kinaesthetic rehabilitation.
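A probabilistic reference model with multi-level (temporal and spatial) error localization can be sketched as follows. This uses a simple per-frame Gaussian rather than the paper's Gaussian Mixture Model, and synthetic data in place of Kinect features:

```python
import numpy as np

def fit_reference(demos):
    """Per-frame mean/std over expert demos of shape (n_demos, n_frames, n_dims)."""
    return demos.mean(axis=0), demos.std(axis=0) + 1e-6

def assess(trial, mean, std, z_thresh=3.0):
    """Return (which frames deviate, which feature deviates most per frame)."""
    z = np.abs(trial - mean) / std
    return z.max(axis=1) > z_thresh, z.argmax(axis=1)

rng = np.random.default_rng(0)
demos = rng.normal(0.0, 0.1, size=(10, 50, 6))   # synthetic expert demonstrations
mean, std = fit_reference(demos)
trial = demos[0].copy()
trial[20, 2] += 5.0                # inject a large error: frame 20, feature 2
flags, dims = assess(trial, mean, std)
```

The flagged frame index answers "when did the error happen" (temporal level) and the arg-max feature answers "which body-part measurement was wrong" (spatial level), which is the information a coaching message needs.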
Pub Date: 2017-11-01 | DOI: 10.1109/HUMANOIDS.2017.8246926
Nicola Scianca, Valerio Modugno, L. Lanari, G. Oriolo
We consider the problem of generating a gait with no a priori assigned footsteps while taking into account the contribution of the swinging leg to the total Zero Moment Point (ZMP). This is achieved by considering a multi-mass model of the humanoid and distinguishing between secondary masses, whose motion is known and pre-defined, and the remaining primary masses. In the case of a single primary mass at constant height, it is possible to transform the original gait generation problem for the multi-mass system into a single LIP-like problem. We can then take full advantage of an intrinsically stable MPC framework to generate a gait that accounts for the swinging leg motion.
Title: Gait generation via intrinsically stable MPC for a multi-mass humanoid model.
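The multi-mass ZMP this abstract starts from has a standard closed form: p = Σ m_i (x_i (z̈_i + g) − ẍ_i z_i) / Σ m_i (z̈_i + g). A direct sketch (sagittal plane only; the numbers in the example call are arbitrary):

```python
def multi_mass_zmp(masses, x, z, ax, az, g=9.81):
    """Sagittal ZMP of point masses with positions (x, z), accelerations (ax, az)."""
    num = sum(m * (xi * (azi + g) - axi * zi)
              for m, xi, zi, axi, azi in zip(masses, x, z, ax, az))
    den = sum(m * (azi + g) for m, azi in zip(masses, az))
    return num / den

# A single mass at constant height z_c recovers the LIP relation
# p = x - (z_c / g) * a_x, which is what makes the reduction to a
# single LIP-like problem possible.
p_single = multi_mass_zmp([30.0], [0.1], [0.8], [0.5], [0.0])
```

With the secondary-mass trajectories pre-defined, their terms in the numerator and denominator become known time-varying offsets, leaving an equation in the primary mass alone.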
Pub Date: 2017-11-01 | DOI: 10.1109/HUMANOIDS.2017.8246873
Kevin Stein, Yue Hu, M. Kudruss, M. Naveau, K. Mombaur
The widely used iCub humanoid robot has previously been shown to walk straight ahead by means of an offline pattern generator, which allowed neither online adjustments nor interaction. In this paper, we present a closed-loop control framework based on a Nonlinear Model Predictive Control pattern generator with feedback of the Center of Mass (CoM) position. This framework allows us to extend the walking capabilities of iCub to different walking directions, such as curved, sideways, and backward walking. Compared to existing methods, the walking speed of iCub is increased by approximately 75% and the step period is decreased by 45%. The framework was successfully tested on a reduced version of the iCub (HeiCub) and shown in simulation to be applicable to the full iCub as well. The measured outcomes of the experiments are the walking velocity, the cost of transport, and the tracking precision of the Zero-Moment Point (ZMP), CoM, and joint trajectories. The online feedback was shown to improve walking stability, improving the CoM tracking precision by 30% and the ZMP tracking precision by 60% compared to the same method without CoM position feedback control.
Title: Closed loop control of walking motions with adaptive choice of directions for the iCub humanoid robot.
Pub Date: 2017-11-01 | DOI: 10.1109/HUMANOIDS.2017.8246967
Y. Kita, Y. Goi, Y. Kawai
A generally applicable method for calibrating robot coordinates and three-dimensional sensor coordinates without a calibration tool is proposed. Most calibration methods require an elaborate calibration tool, such as a marker board with a black-and-white pattern. However, making an accurate calibration board, and making observations of the board while it is attached to, grasped by, or touched by a robot hand, are both troublesome tasks. Aiming to reduce the labor and time required for calibration and to enable on-line registration, we propose using the robot hand itself as a “calibration object” without adding any special marks. Because the position and orientation of the hand as an end-effector are known in robot coordinates, the calibration problem is replaced by that of registering a geometric model of the hand to the observed three-dimensional data in order to determine its location in vision coordinates. To carry out the registration robustly for variously shaped hands, only a planar part of the hand is used for registration within the iterative closest-point framework. To minimize manual interaction, a two-step approach is taken, consisting of an initial registration with manual interaction followed by accuracy improvement using multiple observations. The practical usefulness of the proposed method was examined using four differently shaped hands.
Title: Robot and 3D-sensor calibration using a planar part of a robot hand.
Pub Date: 2017-11-01 | DOI: 10.1109/HUMANOIDS.2017.8246879
Nandan Banerjee, Erik Amaral, Benjamin Axelrod, Steven V. Shamlian, Mark Moseley
In this paper we address the problem of designing a consumer robot capable of manipulating the objects typically present in a home. One reason for the lack of consumer adoption of manipulator robots is that planning grasps while negotiating obstacles is costly in terms of time, power, and computational resources. Robot arms are also generally expensive, confining their use to research labs and industry. The contribution of this paper is twofold. First, we present a hardware design for robot arms that achieves an order-of-magnitude cost reduction over the state of the art. Second, we propose an efficient motion-planning algorithm that, using heuristic initialization, consistently generates grasping motion plans within 1 s. We evaluate the algorithm on the challenging task of grasping objects in a cluttered home environment, using a proprietary physical system with two low-cost 7-DoF arms, three-fingered underactuated hands, and a 1-DoF torso and neck on a holonomic drive base.
Title: Heuristically initialized motion planning in a low cost consumer robot.
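Heuristic initialization in the sense used here can be sketched as: try a cheap straight-line joint-space path first, and fall back to search only when it collides. The fallback below is a naive random via-point search, an illustrative stand-in for the paper's planner:

```python
import random

def straight_line(q_start, q_goal, n=20):
    """Linear joint-space interpolation between two configurations."""
    return [[a + (b - a) * i / (n - 1) for a, b in zip(q_start, q_goal)]
            for i in range(n)]

def plan(q_start, q_goal, collides, tries=200, seed=0):
    """Try the straight-line heuristic first; fall back to random via-points."""
    path = straight_line(q_start, q_goal)
    if not any(collides(q) for q in path):
        return path                      # heuristic succeeded: no search needed
    rng = random.Random(seed)
    for _ in range(tries):
        mid = [(a + b) / 2 + rng.gauss(0.0, 0.5)
               for a, b in zip(q_start, q_goal)]
        path = straight_line(q_start, mid) + straight_line(mid, q_goal)
        if not any(collides(q) for q in path):
            return path
    return None
```

The point of the heuristic is the common case: in uncluttered moments the straight-line path succeeds immediately, so the expensive search only pays its cost when obstacles actually intervene.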
Pub Date: 2017-11-01 | DOI: 10.1109/HUMANOIDS.2017.8246958
Valerio Modugno, Gabriele Nava, D. Pucci, F. Nori, G. Oriolo, S. Ivaldi
Multi-task prioritized controllers generate complex behaviors for humanoids that concurrently satisfy several tasks and constraints. In our previous work, we automatically learned the task priorities that maximized robot performance in whole-body reaching tasks, ensuring that the optimized priorities led to safe behaviors. Here, we take the opposite approach: we optimize the task trajectories for whole-body balancing tasks with switching contacts, ensuring that the optimized movements are safe and never violate any of the robot or problem constraints. We use (1+1)-CMA-ES with Constrained Covariance Adaptation as a constrained black-box stochastic optimization algorithm, with an instance of (1+1)-CMA-ES for bootstrapping the search. We apply our learning framework to the prioritized whole-body torque controller of iCub to optimize the robot's movement for standing up from a chair.
Title: Safe trajectory optimization for whole-body motion of humanoids.
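The constrained (1+1)-ES family the paper builds on can be illustrated with a toy version: one parent, one Gaussian offspring per iteration, step size adapted on success, infeasible candidates rejected. This sketch omits the covariance adaptation of the actual (1+1)-CMA-ES, and the objective and constraint are arbitrary examples:

```python
import random

def sphere(x):
    """Toy objective: squared distance from the origin."""
    return sum(v * v for v in x)

def feasible(x):
    """Toy constraint (arbitrary example): keep x[0] at or above 0.1."""
    return x[0] >= 0.1

def one_plus_one_es(f, x0, sigma=0.5, iters=3000, seed=0):
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    for _ in range(iters):
        cand = [v + sigma * rng.gauss(0.0, 1.0) for v in x]
        if not feasible(cand):
            sigma *= 0.9                 # shrink the step on constraint violation
        elif f(cand) < fx:
            x, fx = cand, f(cand)
            sigma *= 1.1                 # 1/5th-success-rule-style adaptation
        else:
            sigma *= 0.98
    return x, fx

best, val = one_plus_one_es(sphere, [1.0, 1.0])
```

Because infeasible candidates are never accepted, every iterate stays safe, which mirrors the paper's requirement that optimized movements never violate robot or problem constraints during learning.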