Feedback design for multi-contact push recovery via LMI approximation of the Piecewise-Affine Quadratic Regulator
Pub Date: 2017-11-01 | DOI: 10.1109/HUMANOIDS.2017.8246970
Weiqiao Han, Russ Tedrake
To recover from large perturbations, a legged robot must make and break contact with its environment at various locations. These contact switches make it natural to model the robot as a hybrid system. If we apply Model Predictive Control to the feedback design of this hybrid system, the on/off behavior of contacts can be directly encoded using binary variables in a Mixed Integer Programming problem, which scales badly with the number of time steps and is too slow for online computation. We propose novel techniques for the design of stabilizing controllers for such hybrid systems. We approximate the dynamics of the system as a discrete-time Piecewise Affine (PWA) system, and compute the state feedback controllers across the hybrid modes offline via Lyapunov theory. The Lyapunov stability conditions are translated into Linear Matrix Inequalities. A Piecewise Quadratic Lyapunov function together with a Piecewise Linear (PL) feedback controller can be obtained by Semidefinite Programming (SDP). We show that we can embed a quadratic objective in the SDP, designing a controller approximating the Piecewise-Affine Quadratic Regulator. Moreover, we observe that our formulation restricted to the linear system case appears to always produce exactly the unique stabilizing solution to the Discrete Algebraic Riccati Equation. In addition, we extend the search from the PL controller to the PWA controller via Bilinear Matrix Inequalities. Finally, we demonstrate and evaluate our methods on a few PWA systems, including a simplified humanoid robot model.
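To make the synthesis step concrete, below is a minimal sketch of the single-mode (purely linear) case of the LMI formulation described in the abstract: the Lyapunov decrease condition for the closed loop x+ = (A + BK)x is written as a semidefinite constraint via a Schur complement, and the gain K = YQ⁻¹ is recovered from the SDP solution. The matrices A and B, the cvxpy modeling, and the ε margin are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
import cvxpy as cp

# Single-mode (linear) case of the LMI synthesis: find Q > 0 and Y such
# that x+ = (A + B K) x with K = Y Q^{-1} admits the Lyapunov function
# V(x) = x' Q^{-1} x.  A and B below are placeholder dynamics.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])   # e.g. a discretized double integrator
B = np.array([[0.005],
              [0.1]])

n, m = B.shape[0], B.shape[1]
Q = cp.Variable((n, n), symmetric=True)
Y = cp.Variable((m, n))

# Schur-complement form of the decrease condition
# (A Q + B Y)' Q^{-1} (A Q + B Y) - Q < 0.
M = cp.bmat([[Q,            (A @ Q + B @ Y).T],
             [A @ Q + B @ Y, Q]])
eps = 1e-6
constraints = [Q >> eps * np.eye(n), M >> eps * np.eye(2 * n)]

prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)

K = Y.value @ np.linalg.inv(Q.value)
print("state feedback gain K =", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A + B @ K))
```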
{"title":"Feedback design for multi-contact push recovery via LMI approximation of the Piecewise-Affine Quadratic Regulator","authors":"Weiqiao Han, Russ Tedrake","doi":"10.1109/HUMANOIDS.2017.8246970","DOIUrl":"https://doi.org/10.1109/HUMANOIDS.2017.8246970","url":null,"abstract":"To recover from large perturbations, a legged robot must make and break contact with its environment at various locations. These contact switches make it natural to model the robot as a hybrid system. If we apply Model Predictive Control to the feedback design of this hybrid system, the on/off behavior of contacts can be directly encoded using binary variables in a Mixed Integer Programming problem, which scales badly with the number of time steps and is too slow for online computation. We propose novel techniques for the design of stabilizing controllers for such hybrid systems. We approximate the dynamics of the system as a discrete-time Piecewise Affine (PWA) system, and compute the state feedback controllers across the hybrid modes offline via Lyapunov theory. The Lyapunov stability conditions are translated into Linear Matrix Inequalities. A Piecewise Quadratic Lyapunov function together with a Piecewise Linear (PL) feedback controller can be obtained by Semidefinite Programming (SDP). We show that we can embed a quadratic objective in the SDP, designing a controller approximating the Piecewise-Affine Quadratic Regulator. Moreover, we observe that our formulation restricted to the linear system case appears to always produce exactly the unique stabilizing solution to the Discrete Algebraic Riccati Equation. In addition, we extend the search from the PL controller to the PWA controller via Bilinear Matrix Inequalities. Finally, we demonstrate and evaluate our methods on a few PWA systems, including a simplified humanoid robot model.","PeriodicalId":143992,"journal":{"name":"2017 IEEE-RAS 17th International Conference on Humanoid Robotics (Humanoids)","volume":"174 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123387021","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robots learning from robots: A proof of concept study for co-manipulation tasks
Pub Date: 2017-11-01 | DOI: 10.1109/HUMANOIDS.2017.8246916
L. Peternel, A. Ajoudani
In this paper we study the concept of robots learning from collaboration with skilled robots. The advantage of this concept is that human involvement is reduced, while the skill can be propagated faster among robots performing similar collaborative tasks, or tasks executed in hostile environments. The expert robot initially obtains the skill through observation of, and physical collaboration with, the human. We present a novel approach by which a novice robot can learn the specifics of a co-manipulation task through physical interaction with an expert robot. The method consists of a multi-stage learning process that gradually learns the appropriate motion and impedance behaviour under the given task conditions. The trajectories are encoded with Dynamical Movement Primitives and learnt by Locally Weighted Regression, while their phase is estimated by adaptive oscillators. The learnt trajectories are replicated by a hybrid force/impedance controller. To validate the proposed approach, we performed experiments on two robots learning and executing a challenging co-manipulation task.
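As a sketch of the trajectory-encoding step, the snippet below fits the forcing term of a one-dimensional discrete Dynamical Movement Primitive to a demonstrated trajectory using Locally Weighted Regression. The gains, basis count, and width heuristic are placeholder values, and the adaptive-oscillator phase estimation used in the paper is not reproduced here.

```python
import numpy as np

def fit_dmp(y_demo, dt, n_basis=20, alpha_z=25.0, beta_z=6.25, alpha_x=8.0):
    """Fit DMP forcing-term weights to one demonstrated 1-D trajectory."""
    T = len(y_demo)
    tau = T * dt
    yd = np.gradient(y_demo, dt)      # demonstrated velocity
    ydd = np.gradient(yd, dt)         # demonstrated acceleration
    g, y0 = y_demo[-1], y_demo[0]

    # canonical system: x(t) = exp(-alpha_x * t / tau)
    t = np.arange(T) * dt
    x = np.exp(-alpha_x * t / tau)

    # target forcing term implied by the transformation system
    # tau^2 ydd = alpha_z (beta_z (g - y) - tau yd) + f
    f_target = tau**2 * ydd - alpha_z * (beta_z * (g - y_demo) - tau * yd)

    # Gaussian basis functions spread in canonical time
    c = np.exp(-alpha_x * np.linspace(0, 1, n_basis))
    h = n_basis / c                   # heuristic widths
    psi = np.exp(-h * (x[:, None] - c) ** 2)

    # LWR: one independent weighted least-squares fit per basis function
    s = x * (g - y0)
    w = np.array([(s * p * f_target).sum() / ((s * p * s).sum() + 1e-10)
                  for p in psi.T])
    return w, c, h, (g, y0, tau, alpha_z, beta_z, alpha_x)

# fit to a smooth reaching-like demonstration
demo = np.sin(np.linspace(0, np.pi / 2, 100))
w, c, h, params = fit_dmp(demo, dt=0.01)
print("forcing-term weights:", np.round(w[:5], 2), "...")
```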
{"title":"Robots learning from robots: A proof of concept study for co-manipulation tasks","authors":"L. Peternel, A. Ajoudani","doi":"10.1109/HUMANOIDS.2017.8246916","DOIUrl":"https://doi.org/10.1109/HUMANOIDS.2017.8246916","url":null,"abstract":"In this paper we study the concept of robots learning from collaboration with skilled robots. The advantage of this concept is that the human involvement is reduced, while the skill can be propagated faster among the robots performing similar collaborative tasks or the ones being executed in hostile environments. The expert robot initially obtains the skill through the observation of, and physical collaboration with the human. We present a novel approach to how a novice robot can learn the specifics of the co-manipulation task from the physical interaction with an expert robot. The method consists of a multi-stage learning process that can gradually learn the appropriate motion and impedance behaviour under given task conditions. The trajectories are encoded with Dynamical Movement Primitives and learnt by Locally Weighted Regression, while their phase is estimated by adaptive oscillators. The learnt trajectories are replicated by a hybrid force/impedance controller. To validate the proposed approach we performed experiments on two robots learning and executing a challenging co-manipulation task.","PeriodicalId":143992,"journal":{"name":"2017 IEEE-RAS 17th International Conference on Humanoid Robotics (Humanoids)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123528063","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Compositional autonomy for humanoid robots with risk-aware decision-making
Pub Date: 2017-11-01 | DOI: 10.1109/HUMANOIDS.2017.8246927
X. Long, P. Long, T. Padır
This paper lays the foundations of risk-aware decision-making within the context of compositional robot autonomy for humanoid robots. In a nutshell, the idea is to compose task-level autonomous robot behaviors into a holistic motion plan by selecting a sequence of actions from a feasible action set. In doing so, we establish a total risk function that evaluates and assigns a risk value to individual robot actions, which can then be used to find the total risk of executing a plan. As a result, various actions can be composed into a complete autonomous motion plan while the robot remains cognizant of the risks associated with executing one composition over another. To illustrate the concept, we introduce two specific risk measures, namely the collision risk and the fall risk. We demonstrate the results from this foundational study of risk-aware compositional robot autonomy in simulation using NASA's Valkyrie humanoid robot.
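The composition idea can be illustrated with a toy enumeration: each candidate action carries per-measure risk values (collision and fall, the two measures named in the abstract), a total risk function accumulates them over a plan, and the least risky composition is selected. All action names, risk numbers, and weights below are invented for illustration.

```python
from itertools import product

WEIGHTS = {"collision": 0.5, "fall": 0.5}  # assumed relative weighting

def total_risk(plan):
    """Accumulate weighted per-measure risks over all actions in a plan."""
    return sum(WEIGHTS[m] * r for action in plan
               for m, r in action["risk"].items())

step_options = [
    {"name": "step_long",  "risk": {"collision": 0.10, "fall": 0.30}},
    {"name": "step_short", "risk": {"collision": 0.05, "fall": 0.10}},
]
reach_options = [
    {"name": "reach_left",  "risk": {"collision": 0.40, "fall": 0.05}},
    {"name": "reach_right", "risk": {"collision": 0.15, "fall": 0.05}},
]

# enumerate every composition of one step followed by one reach and
# select the plan with the lowest total risk
plans = list(product(step_options, reach_options))
best = min(plans, key=total_risk)
print([a["name"] for a in best], "total risk =", round(total_risk(best), 3))
```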
{"title":"Compositional autonomy for humanoid robots with risk-aware decision-making","authors":"X. Long, P. Long, T. Padır","doi":"10.1109/HUMANOIDS.2017.8246927","DOIUrl":"https://doi.org/10.1109/HUMANOIDS.2017.8246927","url":null,"abstract":"This paper lays the foundations of risk-aware decision-making within the context of compositional robot autonomy for humanoid robots. In a nutshell, the idea is to compose task-level autonomous robot behaviors into a holistic motion plan by selecting a sequence of actions from a feasible action set. In doing so, we establish a total risk function to evaluate and assign a risk value to individual robot actions which then can be used to find the total risk of executing a plan. As a result, various actions can be composed into a complete autonomous motion plan while the robot is being cognizant to risks associated with executing one composition over another. In order to illustrate the concept, we introduce two specific risk measures, namely, the collision risk and the fall risk. We demonstrate the results from this foundational study of risk-aware compositional robot autonomy in simulation using NASA's Valkyrie humanoid robot.","PeriodicalId":143992,"journal":{"name":"2017 IEEE-RAS 17th International Conference on Humanoid Robotics (Humanoids)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124983379","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Gaze and filled pause detection for smooth human-robot conversations
Pub Date: 2017-11-01 | DOI: 10.1109/HUMANOIDS.2017.8246889
Miriam Bilac, Marine Chamoux, Angelica Lim
Let the human speak! Interactive robots and voice interfaces such as Pepper, Amazon Alexa, and OK Google are becoming more and more popular, allowing for more natural interaction than screens or keyboards. One issue with voice interfaces is that they tend to require a “robotic” flow of human speech. Humans must be careful not to produce disfluencies, such as hesitations or extended pauses between words. If they do, the agent may assume that the human has finished their speech turn, and interrupt them mid-thought. Interactive robots often rely on the same limited dialogue technology built for speech interfaces. Yet humanoid robots have the potential to also use their vision systems to determine when the human has finished their speaking turn. In this paper, we introduce HOMAGE (Human-rObot Multimodal Audio and Gaze End-of-turn), a multimodal turn-taking system for conversational humanoid robots. We created a dataset of humans spontaneously hesitating when responding to a robot's open-ended questions such as, “What was your favorite moment this year?”. Our analyses found that users produced both auditory filled pauses such as “uhhh” and gaze away from the robot to keep their speaking turn. We then trained a machine learning system to detect the auditory filled pauses and integrated it, along with gaze, into the Pepper humanoid robot's real-time dialog system. Experiments with 28 naive users revealed that adding auditory filled pause detection and gaze tracking significantly reduced robot interruptions. Furthermore, user turns were 2.1 times longer (without repetitions), suggesting that this strategy allows humans to express themselves more, with less time pressure and better robot listeners.
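A hypothetical distillation of the decision logic: the robot takes its turn only when the silence is long enough and neither turn-keeping cue (an auditory filled pause or gaze averted from the robot) is active. The threshold value and function interface are assumptions, not the system's actual implementation.

```python
SILENCE_THRESHOLD_S = 1.5  # assumed pause length, not the paper's value

def user_turn_ended(silence_duration_s, filled_pause_detected, gaze_on_robot):
    """Declare end-of-turn only when the pause is long enough AND neither
    turn-keeping cue (filled pause, averted gaze) is active."""
    if filled_pause_detected or not gaze_on_robot:
        return False  # the user is hesitating or thinking: keep listening
    return silence_duration_s >= SILENCE_THRESHOLD_S

# a long pause, but the user said "uhhh" while looking away: keep listening
print(user_turn_ended(2.0, filled_pause_detected=True, gaze_on_robot=False))
# a long pause with gaze back on the robot and no filled pause: take the turn
print(user_turn_ended(2.0, filled_pause_detected=False, gaze_on_robot=True))
```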
Deep visual perception for dynamic walking on discrete terrain
Pub Date: 2017-11-01 | DOI: 10.1109/HUMANOIDS.2017.8246907
Avinash Siravuru, Allan Wang, Quan Nguyen, K. Sreenath
Dynamic bipedal walking on discrete terrain, like stepping stones, is a challenging problem requiring feedback controllers to enforce safety-critical constraints. To enforce such constraints in real-world experiments, fast and accurate perception for foothold detection and estimation is needed. In this work, a deep visual perception model is designed to accurately estimate the length of the next step, which serves as input to the feedback controller to enable vision-in-the-loop dynamic walking on discrete terrain. In particular, a custom convolutional neural network architecture is designed and trained to predict the step length to the next foothold using a sampled image preview of the upcoming terrain at foot impact. The visual input is offered only at the beginning of each step and is shown to be sufficient for the task of dynamically stepping onto discrete footholds. Through extensive numerical studies, we show that the robot is able to successfully and autonomously walk for over 100 steps without failure on a discrete terrain with footholds randomly positioned within a step length range of [45, 85] centimeters.
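As a rough sketch of this regression setup, the PyTorch snippet below maps a single terrain image to a scalar step-length prediction. The layer sizes, input resolution, and single-channel input are placeholder choices; the paper's custom architecture differs.

```python
import torch
import torch.nn as nn

class StepLengthNet(nn.Module):
    """Toy CNN regressor: one terrain image in, one step length out."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 4 * 4, 64), nn.ReLU(),
            nn.Linear(64, 1),  # predicted step length to the next foothold
        )

    def forward(self, img):
        return self.head(self.features(img))

net = StepLengthNet()
preview = torch.randn(1, 1, 64, 64)  # one grayscale preview at foot impact
step_length = net(preview)
print(step_length.shape)             # torch.Size([1, 1])
```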
{"title":"Deep visual perception for dynamic walking on discrete terrain","authors":"Avinash Siravuru, Allan Wang, Quan Nguyen, K. Sreenath","doi":"10.1109/HUMANOIDS.2017.8246907","DOIUrl":"https://doi.org/10.1109/HUMANOIDS.2017.8246907","url":null,"abstract":"Dynamic bipedal walking on discrete terrain, like stepping stones, is a challenging problem requiring feedback controllers to enforce safety-critical constraints. To enforce such constraints in real-world experiments, fast and accurate perception for foothold detection and estimation is needed. In this work, a deep visual perception model is designed to accurately estimate step length of the next step, which serves as input to the feedback controller to enable vision-in-the-loop dynamic walking on discrete terrain. In particular, a custom convolutional neural network architecture is designed and trained to predict step length to the next foothold using a sampled image preview of the upcoming terrain at foot impact. The visual input is offered only at the beginning of each step and is shown to be sufficient for the job of dynamically stepping onto discrete footholds. Through extensive numerical studies, we show that the robot is able to successfully autonomously walk for over 100 steps without failure on a discrete terrain with footholds randomly positioned within a step length range of [45 : 85] centimeters.","PeriodicalId":143992,"journal":{"name":"2017 IEEE-RAS 17th International Conference on Humanoid Robotics (Humanoids)","volume":"92 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122532911","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Footwear discrimination using dynamic tactile information
Pub Date: 2017-11-01 | DOI: 10.1109/HUMANOIDS.2017.8246886
A. Drimus, Vedran Mikov
This paper shows that it is possible to differentiate among various types of footwear solely by using high-dimensional pressure information provided by a sensorised insole. To achieve this, a person equipped with two sensorised insoles streaming real-time tactile data to a computer performs normal walking patterns. The sampled data is then transformed and reduced to sets of time series, which are used for the classification of footwear. The pressure sensor is formed as a footwear inlay and is based on piezoresistive rubber with 1024 tactile cells providing normal pressure information in the form of a tactile image. The data is transmitted wirelessly in real time at 30 fps from two such sensors. The online classification uses dynamic time warping distances over the extracted features to identify the most similar type of footwear based on time series similarity. The paper shows that various footwear types yield distinct tactile patterns which can be assessed by the proposed algorithm.
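The classification step might look like the following sketch: a textbook dynamic time warping distance combined with a nearest-neighbour vote over labelled reference series. The feature extraction from the 1024-cell tactile images is omitted, and the example series and labels are fabricated for illustration.

```python
import numpy as np

def dtw_distance(a, b):
    """Plain O(len(a)*len(b)) dynamic time warping distance."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify(query, references):
    """references: list of (label, series) pairs; returns nearest label."""
    return min(references, key=lambda r: dtw_distance(query, r[1]))[0]

refs = [("sneaker", np.array([0.1, 0.5, 0.9, 0.4, 0.1])),
        ("boot",    np.array([0.3, 0.8, 1.2, 1.0, 0.3]))]
print(classify(np.array([0.2, 0.6, 1.0, 0.5, 0.1]), refs))  # "sneaker"
```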
{"title":"Footwear discrimination using dynamic tactile information","authors":"A. Drimus, Vedran Mikov","doi":"10.1109/HUMANOIDS.2017.8246886","DOIUrl":"https://doi.org/10.1109/HUMANOIDS.2017.8246886","url":null,"abstract":"This paper shows that it is possible to differentiate among various type of footwear solely by using highly dimensional pressure information provided by a sensorised insole. In order to achieve this, a person equipped with two sensorised insoles streaming real-time tactile data to a computer performs normal walking patterns. The sampled data is further transformed and reduced to sets of time series which are used for the classification of footwear. The pressure sensor is formed as a footwear inlay and is based on piezoresistive rubber having 1024 tactile cells providing normal pressure information in the form of a tactile image. The data is transmitted in realtime wirelessly at 30 fps from two such sensors. The online classification is using the dynamic time warping distances for different extracted features to assess the most similar type of footwear based on time series similarities. The paper shows that various footwear types yield distinct tactile patterns which can be assessed by the proposed algorithm.","PeriodicalId":143992,"journal":{"name":"2017 IEEE-RAS 17th International Conference on Humanoid Robotics (Humanoids)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115491986","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sensing device simulating human buttock for the validation of robotic devices for nursing care
Pub Date: 2017-11-01 | DOI: 10.1109/HUMANOIDS.2017.8246960
Kunihiro Ogata, I. Kajitani, K. Homma, Y. Matsumoto
Robotic devices for nursing care are expected to help caregivers work with the elderly. Some robotic devices assist in the physical transfer of the elderly, and these robots come in contact with large surfaces of the human body, where regions such as the buttocks and the back may experience discomfort. Therefore, sensing devices simulating a human buttock were developed to quantify and evaluate the load on a human body objectively. This buttock dummy consists of simulated bone and soft tissues, including muscle, fat, and skin. These regions contain multi-axis force sensors that enable quantification of the load imposed by robotic devices used for nursing care. Measurements of the soft exterior showed that the stiffness of the buttock dummy was similar to that of a human buttock. The comfort of a robotic bed was measured using the buttock dummy, and it was found that the shear force increased due to the deformation of the robotic bed. Thus, the buttock dummy proved capable of measuring the load on the human body when used with robotic devices for nursing care.
{"title":"Sensing device simulating human buttock for the validation of robotic devices for nursing care","authors":"Kunihiro Ogata, I. Kajitani, K. Homma, Y. Matsumoto","doi":"10.1109/HUMANOIDS.2017.8246960","DOIUrl":"https://doi.org/10.1109/HUMANOIDS.2017.8246960","url":null,"abstract":"Robotic devices for nursing care are expected to help caregivers work with the elderly. Some robotic devices assist in the physical transfer of the elderly, and these robots come in contact with large surfaces of the human body. The regions of the buttock and the back may be uncomfortable due to these robotic devices. Therefore, sensing devices simulating a human buttock were developed to quantify and evaluate the load of a human body objectively. This buttock dummy consists of simulated bone and soft tissues, which include muscle, fat and skin. These regions have multi-axis force sensors to enable the quantification of the load due to the robotic devices used for nursing care. On measuring the soft exterior, it was found that the stiffness of the buttock dummy was similar to the human buttock. The comfort of a robotic bed was measured using the buttock dummy, and it was found that the shear force increased due to the deformation of the robotic bed. Thus, it was proven that the buttock dummy was capable of measuring the load of the human body when being used with robotic devices for nursing care.","PeriodicalId":143992,"journal":{"name":"2017 IEEE-RAS 17th International Conference on Humanoid Robotics (Humanoids)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116990811","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
NimbRo-OP2: Grown-up 3D printed open humanoid platform for research
Pub Date: 2017-11-01 | DOI: 10.1109/HUMANOIDS.2017.8246944
Grzegorz Ficht, Philipp Allgeuer, Hafez Farazi, Sven Behnke
The versatility of humanoid robots in locomotion, full-body motion, interaction with unmodified human environments, and intuitive human-robot interaction has led to increased research interest. Multiple smaller platforms are available for research, but these require a miniaturized environment to interact with, and the small scale of the robot often diminishes the influence of factors that would affect larger robots. Unfortunately, many research platforms in the larger size range are less affordable, more difficult to operate, maintain, and modify, and very often closed-source. In this work, we introduce NimbRo-OP2, an affordable platform that is fully open-source in terms of both hardware and software. Being almost 135 cm tall yet only 18 kg in weight, the robot is not only capable of interacting in an environment meant for humans, but is also easy and safe to operate, without requiring a gantry. The exoskeleton of the robot is 3D printed, which produces a lightweight and visually appealing design. We present all mechanical and electrical aspects of the robot, as well as some of the software features of our well-established open-source ROS software. The NimbRo-OP2 performed at RoboCup 2017 in Nagoya, Japan, where it won the Humanoid League AdultSize Soccer competition and Technical Challenge.
{"title":"NimbRo-OP2: Grown-up 3D printed open humanoid platform for research","authors":"Grzegorz Ficht, Philipp Allgeuer, Hafez Farazi, Sven Behnke","doi":"10.1109/HUMANOIDS.2017.8246944","DOIUrl":"https://doi.org/10.1109/HUMANOIDS.2017.8246944","url":null,"abstract":"The versatility of humanoid robots in locomotion, full-body motion, interaction with unmodified human environments, and intuitive human-robot interaction led to increased research interest. Multiple smaller platforms are available for research, but these require a miniaturized environment to interact with–and often the small scale of the robot diminishes the influence of factors which would have affected larger robots. Unfortunately, many research platforms in the larger size range are less affordable, more difficult to operate, maintain and modify, and very often closed-source. In this work, we introduce NimbRo-OP2, an affordable, fully open-source platform in terms of both hardware and software. Being almost 135 cm tall and only 18 kg in weight, the robot is not only capable of interacting in an environment meant for humans, but also easy and safe to operate and does not require a gantry when doing so. The exoskeleton of the robot is 3D printed, which produces a lightweight and visually appealing design. We present all mechanical and electrical aspects of the robot, as well as some of the software features of our well-established open-source ROS software. The NimbRo-OP2 performed at RoboCup 2017 in Nagoya, Japan, where it won the Humanoid League AdultSize Soccer competition and Technical Challenge.","PeriodicalId":143992,"journal":{"name":"2017 IEEE-RAS 17th International Conference on Humanoid Robotics (Humanoids)","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127404439","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Online estimation of friction constraints for multi-contact whole body control
Pub Date: 2017-11-01 | DOI: 10.1109/HUMANOIDS.2017.8246896
Cameron P. Ridgewell, Robert J. Griffin, T. Furukawa, B. Lattimer
This paper proposes a technique for experimentally approximating surface friction coefficients at contact time in multi-contact applications. Unlike other multi-contact formulations, our approach does not assume a standard friction coefficient; instead, it induces slip in a multi-contact oriented humanoid to estimate the available friction force. Incrementally increased tangential force, measured with ankle-mounted force-torque sensors, is used as the basis for slip detection and friction coefficient estimation at the hand. This technique is validated in simulation on a simple three-link model and extended to the humanoid robot platform ESCHER. The approximated friction values are utilized by the robot's whole body controller to prevent multi-contact end effector slip.
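A toy rendering of the estimation loop: the tangential force command is ramped up while the force-torque readings are monitored, and the Coulomb friction coefficient is taken as mu ≈ F_t / F_n at the instant slip is detected. The slip detector, ramp parameters, and simulated contact below are placeholders, not the controller's implementation.

```python
def estimate_friction(read_forces, detect_slip, ramp_rate=100.0, dt=0.01,
                      max_tangential=200.0):
    """Ramp the tangential force command until slip is detected; return the
    Coulomb friction estimate mu = F_t / F_n at that instant."""
    f_t_cmd = 0.0
    while f_t_cmd < max_tangential:
        f_t_cmd += ramp_rate * dt          # incremental tangential command
        f_t, f_n = read_forces(f_t_cmd)    # force-torque sensor reading
        if detect_slip(f_t, f_n):
            return f_t / f_n
    return None  # no slip observed within the safe force range

# simulated contact with true mu = 0.6 under a 100 N normal load
TRUE_MU, NORMAL_LOAD = 0.6, 100.0
read = lambda cmd: (min(cmd, TRUE_MU * NORMAL_LOAD), NORMAL_LOAD)
slipped = lambda ft, fn: ft >= TRUE_MU * fn - 1e-6
print(estimate_friction(read, slipped))  # ~0.6
```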
{"title":"Online estimation of friction constraints for multi-contact whole body control","authors":"Cameron P. Ridgewell, Robert J. Griffin, T. Furukawa, B. Lattimer","doi":"10.1109/HUMANOIDS.2017.8246896","DOIUrl":"https://doi.org/10.1109/HUMANOIDS.2017.8246896","url":null,"abstract":"This paper proposes a technique for experimentally approximating surface friction coefficients at contacttime in multi-contact applications. Unlike other multi-contact formulations, our approach does not assume a standard friction coefficient, and instead induces slip in a multi-contact oriented humanoid to estimate available friction force. Incrementally increased tangential force, measured with ankle-mounted force-torque sensors, is used as the basis for slip detection and friction coefficient estimation at the hand. This technique is validated in simulation on a simple three-link model and extended to the humanoid robot platform ESCHER. Approximated friction values are utilized by the robot's whole body controller to prevent multi-contact end effector slip.","PeriodicalId":143992,"journal":{"name":"2017 IEEE-RAS 17th International Conference on Humanoid Robotics (Humanoids)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129072002","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Real-time evolutionary model predictive control using a graphics processing unit
Pub Date: 2017-11-01 | DOI: 10.1109/HUMANOIDS.2017.8246929
Phillip Hyatt, Marc D. Killpack
With humanoid robots becoming more complex and operating in un-modeled or human environments, there is a growing need for control methods that are scalable and robust while still maintaining compliance for safety reasons. Model Predictive Control (MPC) is an optimal control method which has proven robust to modeling error and disturbances. However, it can be difficult to implement for high degree-of-freedom (DoF) systems due to the optimization problem that must be solved. While evolutionary algorithms have proven effective for complex large-scale optimization problems, they have not been formulated to find solutions quickly enough for use with MPC. This work details the implementation of a parallelized evolutionary MPC (EMPC) algorithm which is able to run in real time through the use of a Graphics Processing Unit (GPU). This parallelization is accomplished by simulating candidate control input trajectories in parallel on the GPU. We show that this framework is more flexible in terms of cost function definition than traditional MPC and that it shows promise for finding solutions for high-DoF systems.
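In outline, one plausible form of the EMPC loop is sketched below: a population of control input trajectories is sampled, all candidates are rolled out simultaneously (vectorized with numpy here as a stand-in for GPU kernels), costs are accumulated along the horizon, and the population is resampled around the elite candidates. The dynamics, cost, and hyperparameters are toy placeholders, not the paper's setup.

```python
import numpy as np

def empc_step(x0, dynamics, cost, horizon=20, pop=256, iters=5, u_std=1.0):
    """One EMPC cycle: evolve a population of input trajectories and
    return the first input of the best candidate (receding horizon)."""
    U = np.random.randn(pop, horizon) * u_std        # candidate trajectories
    for _ in range(iters):
        X = np.tile(x0, (pop, 1))                    # (pop, n) parallel states
        J = np.zeros(pop)
        for t in range(horizon):                     # simultaneous rollout
            X = dynamics(X, U[:, t])
            J += cost(X, U[:, t])
        elite = U[np.argsort(J)[: pop // 8]]         # keep the best 1/8
        # resample the population by mutating random elite members
        parents = elite[np.random.randint(len(elite), size=pop)]
        U = parents + np.random.randn(pop, horizon) * u_std * 0.5
        U[: len(elite)] = elite                      # elitism
    return U[0, 0]                                   # best-so-far first input

# toy double integrator: drive position and velocity to zero
dyn = lambda X, u: np.stack([X[:, 0] + 0.1 * X[:, 1],
                             X[:, 1] + 0.1 * u], axis=1)
cst = lambda X, u: X[:, 0] ** 2 + 0.1 * X[:, 1] ** 2 + 1e-3 * u ** 2
print(empc_step(np.array([1.0, 0.0]), dyn, cst))
```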
{"title":"Real-time evolutionary model predictive control using a graphics processing unit","authors":"Phillip Hyatt, Marc D. Killpack","doi":"10.1109/HUMANOIDS.2017.8246929","DOIUrl":"https://doi.org/10.1109/HUMANOIDS.2017.8246929","url":null,"abstract":"With humanoid robots becoming more complex and operating in un-modeled or human environments, there is a growing need for control methods that are scalable and robust, while still maintaining compliance for safety reasons. Model Predictive Control (MPC) is an optimal control method which has proven robust to modeling error and disturbances. However, it can be difficult to implement for high degree of freedom (DoF) systems due to the optimization problem that must be solved. While evolutionary algorithms have proven effective for complex large-scale optimization problems, they have not been formulated to find solutions quickly enough for use with MPC. This work details the implementation of a parallelized evolutionary MPC (EMPC) algorithm which is able to run in real-time through the use of a Graphics Processing Unit (GPU). This parallelization is accomplished by simulating candidate control input trajectories in parallel on the GPU. We show that this framework is more flexible in terms of cost function definition than traditional MPC and that it shows promise for finding solutions for high DoF systems.","PeriodicalId":143992,"journal":{"name":"2017 IEEE-RAS 17th International Conference on Humanoid Robotics (Humanoids)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129795740","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}