
2017 IEEE-RAS 17th International Conference on Humanoid Robotics (Humanoids): Latest Publications

Feedback design for multi-contact push recovery via LMI approximation of the Piecewise-Affine Quadratic Regulator
Pub Date : 2017-11-01 DOI: 10.1109/HUMANOIDS.2017.8246970
Weiqiao Han, Russ Tedrake
To recover from large perturbations, a legged robot must make and break contact with its environment at various locations. These contact switches make it natural to model the robot as a hybrid system. If we apply Model Predictive Control to the feedback design of this hybrid system, the on/off behavior of contacts can be directly encoded using binary variables in a Mixed Integer Programming problem, which scales badly with the number of time steps and is too slow for online computation. We propose novel techniques for the design of stabilizing controllers for such hybrid systems. We approximate the dynamics of the system as a discrete-time Piecewise Affine (PWA) system, and compute the state feedback controllers across the hybrid modes offline via Lyapunov theory. The Lyapunov stability conditions are translated into Linear Matrix Inequalities. A Piecewise Quadratic Lyapunov function together with a Piecewise Linear (PL) feedback controller can be obtained by Semidefinite Programming (SDP). We show that we can embed a quadratic objective in the SDP, designing a controller approximating the Piecewise-Affine Quadratic Regulator. Moreover, we observe that our formulation restricted to the linear system case appears to always produce exactly the unique stabilizing solution to the Discrete Algebraic Riccati Equation. In addition, we extend the search from the PL controller to the PWA controller via Bilinear Matrix Inequalities. Finally, we demonstrate and evaluate our methods on a few PWA systems, including a simplified humanoid robot model.
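The core LMI step described here can be illustrated for a single linear mode: with the standard change of variables Q = P^{-1}, Y = KQ, the closed-loop Lyapunov decrease condition becomes a semidefinite program. The sketch below uses CVXPY on a hypothetical discrete-time double-integrator mode; it is not the authors' piecewise formulation, which couples such conditions across hybrid modes and embeds a quadratic objective.

```python
import numpy as np
import cvxpy as cp

# Hypothetical single PWA mode: discrete-time double integrator x+ = A x + B u.
dt = 0.1
A = np.array([[1.0, dt],
              [0.0, 1.0]])
B = np.array([[0.5 * dt**2],
              [dt]])
n, m = B.shape

# Change of variables Q = P^{-1}, Y = K Q turns the Lyapunov decrease condition
# (A + B K)^T P (A + B K) - P < 0 into a linear matrix inequality via the Schur complement.
Q = cp.Variable((n, n), symmetric=True)
Y = cp.Variable((m, n))
eps = 1e-6
lmi = cp.bmat([[Q, (A @ Q + B @ Y).T],
               [A @ Q + B @ Y, Q]])
prob = cp.Problem(cp.Minimize(0),
                  [Q >> eps * np.eye(n), lmi >> eps * np.eye(2 * n)])
prob.solve(solver=cp.SCS)

P = np.linalg.inv(Q.value)
K = Y.value @ P                       # state feedback u = K x
print("closed-loop eigenvalues:", np.linalg.eigvals(A + B @ K))
```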
Citations: 14
Robots learning from robots: A proof of concept study for co-manipulation tasks
Pub Date : 2017-11-01 DOI: 10.1109/HUMANOIDS.2017.8246916
L. Peternel, A. Ajoudani
In this paper we study the concept of robots learning from collaboration with skilled robots. The advantage of this concept is that the human involvement is reduced, while the skill can be propagated faster among the robots performing similar collaborative tasks or the ones being executed in hostile environments. The expert robot initially obtains the skill through the observation of, and physical collaboration with the human. We present a novel approach to how a novice robot can learn the specifics of the co-manipulation task from the physical interaction with an expert robot. The method consists of a multi-stage learning process that can gradually learn the appropriate motion and impedance behaviour under given task conditions. The trajectories are encoded with Dynamical Movement Primitives and learnt by Locally Weighted Regression, while their phase is estimated by adaptive oscillators. The learnt trajectories are replicated by a hybrid force/impedance controller. To validate the proposed approach we performed experiments on two robots learning and executing a challenging co-manipulation task.
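The encoding step can be sketched for one degree of freedom: fit the forcing term of a discrete Dynamical Movement Primitive to a demonstrated trajectory with locally weighted regression. The demonstration below is a synthetic minimum-jerk reach, and the gains and basis-function layout are generic choices rather than the values used in the paper.

```python
import numpy as np

# Hypothetical 1-D demonstration (minimum-jerk profile from 0 to 1 in 1 s).
T, dt = 1.0, 0.002
t = np.arange(0.0, T + dt, dt)
r = t / T
y = 10 * r**3 - 15 * r**4 + 6 * r**5
yd = np.gradient(y, dt)
ydd = np.gradient(yd, dt)
y0, g, tau = y[0], y[-1], T

# Discrete DMP transformation system: tau^2 * ydd = az*(bz*(g - y) - tau*yd) + f(s)
az, bz, alpha_s = 25.0, 25.0 / 4.0, 4.0
s = np.exp(-alpha_s * t / tau)                       # canonical phase, 1 -> ~0
f_target = tau**2 * ydd - az * (bz * (g - y) - tau * yd)

# Gaussian basis functions spaced in phase; one weight per basis fitted by
# locally weighted regression: w_i = sum(psi_i*xi*f) / sum(psi_i*xi^2).
N = 15
c = np.exp(-alpha_s * np.linspace(0.0, 1.0, N))      # centres
h = 1.0 / np.diff(c, append=c[-1] * 0.5) ** 2        # widths
PSI = np.exp(-h * (s[:, None] - c) ** 2)             # (time, N)
xi = s * (g - y0)                                    # amplitude/phase scaling
w = (PSI * xi[:, None]).T @ f_target / ((PSI * (xi**2)[:, None]).sum(axis=0) + 1e-12)

# Reproduce the forcing term from the learnt weights to check the fit.
f_fit = (PSI @ w) / (PSI.sum(axis=1) + 1e-12) * xi
print("forcing-term RMS error:", np.sqrt(np.mean((f_fit - f_target) ** 2)))
```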
Citations: 5
Compositional autonomy for humanoid robots with risk-aware decision-making
Pub Date : 2017-11-01 DOI: 10.1109/HUMANOIDS.2017.8246927
X. Long, P. Long, T. Padır
This paper lays the foundations of risk-aware decision-making within the context of compositional robot autonomy for humanoid robots. In a nutshell, the idea is to compose task-level autonomous robot behaviors into a holistic motion plan by selecting a sequence of actions from a feasible action set. In doing so, we establish a total risk function to evaluate and assign a risk value to individual robot actions, which can then be used to find the total risk of executing a plan. As a result, various actions can be composed into a complete autonomous motion plan while the robot remains cognizant of the risks associated with executing one composition over another. In order to illustrate the concept, we introduce two specific risk measures, namely, the collision risk and the fall risk. We demonstrate the results from this foundational study of risk-aware compositional robot autonomy in simulation using NASA's Valkyrie humanoid robot.
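A minimal sketch of the composition idea, with made-up action variants and constant risk values (the paper's collision and fall risks are computed from models, not constants): enumerate the feasible action sequences and select the plan whose total risk is lowest, here treating per-action risks as independent failure probabilities.

```python
from itertools import product

# Hypothetical action set: each variant carries (collision_risk, fall_risk).
actions = {
    "reach": {"arm_left": (0.05, 0.01), "arm_right": (0.08, 0.01)},
    "step":  {"short":    (0.02, 0.10), "long":      (0.02, 0.25)},
    "grasp": {"one_hand": (0.04, 0.02), "two_hand":  (0.10, 0.02)},
}

def total_risk(plan):
    """Total risk of a plan, treating each risk term as an independent
    failure probability: 1 - prod(1 - r) over all terms in the plan."""
    p_ok = 1.0
    for task, variant in plan:
        for r in actions[task][variant]:
            p_ok *= (1.0 - r)
    return 1.0 - p_ok

# Compose one variant per task and pick the lowest-risk plan.
tasks = list(actions)
best = min(
    (list(zip(tasks, combo)) for combo in product(*(actions[t] for t in tasks))),
    key=total_risk,
)
print(best, round(total_risk(best), 4))
```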
Citations: 3
Gaze and filled pause detection for smooth human-robot conversations
Pub Date : 2017-11-01 DOI: 10.1109/HUMANOIDS.2017.8246889
Miriam Bilac, Marine Chamoux, Angelica Lim
Let the human speak! Interactive robots and voice interfaces such as Pepper, Amazon Alexa, and OK Google are becoming more and more popular, allowing for more natural interaction compared to screens or keyboards. One issue with voice interfaces is that they tend to require a “robotic” flow of human speech. Humans must be careful to not produce disfluencies, such as hesitations or extended pauses between words. If they do, the agent may assume that the human has finished their speech turn, and interrupts them mid-thought. Interactive robots often rely on the same limited dialogue technology built for speech interfaces. Yet humanoid robots have the potential to also use their vision systems to determine when the human has finished their speaking turn. In this paper, we introduce HOMAGE (Human-rObot Multimodal Audio and Gaze End-of-turn), a multimodal turntaking system for conversational humanoid robots. We created a dataset of humans spontaneously hesitating when responding to a robot's open-ended questions such as, “What was your favorite moment this year?”. Our analyses found that users produced both auditory filled pauses such as “uhhh”, as well as gaze away from the robot to keep their speaking turn. We then trained a machine learning system to detect the auditory filled pauses and integrated it along with gaze into the Pepper humanoid robot's real-time dialog system. Experiments with 28 naive users revealed that adding auditory filled pause detection and gaze tracking significantly reduced robot interruptions. Furthermore, user turns were 2.1 times longer (without repetitions), suggesting that this strategy allows humans to express themselves more, toward less time pressure and better robot listeners.
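A minimal sketch of the kind of end-of-turn rule such a system enables; the feature names and thresholds are hypothetical, whereas the paper trains a classifier for auditory filled pauses and integrates it with gaze tracking in Pepper's real-time dialog system.

```python
def end_of_turn(silence_ms, filled_pause_prob, gaze_on_robot,
                silence_threshold_ms=800, pause_threshold=0.5):
    """Hypothetical multimodal end-of-turn rule: yield the turn only when the
    user has been silent long enough, is not producing a filled pause ("uhhh"),
    and is looking back at the robot."""
    if silence_ms < silence_threshold_ms:
        return False                 # user still speaking or only briefly pausing
    if filled_pause_prob >= pause_threshold:
        return False                 # hesitation detected: keep listening
    if not gaze_on_robot:
        return False                 # gaze aversion signals the turn is being held
    return True

# Example: a long silence, but the user gazes away while thinking, so keep listening.
print(end_of_turn(silence_ms=1200, filled_pause_prob=0.1, gaze_on_robot=False))
```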
Citations: 9
Deep visual perception for dynamic walking on discrete terrain
Pub Date : 2017-11-01 DOI: 10.1109/HUMANOIDS.2017.8246907
Avinash Siravuru, Allan Wang, Quan Nguyen, K. Sreenath
Dynamic bipedal walking on discrete terrain, like stepping stones, is a challenging problem requiring feedback controllers to enforce safety-critical constraints. To enforce such constraints in real-world experiments, fast and accurate perception for foothold detection and estimation is needed. In this work, a deep visual perception model is designed to accurately estimate step length of the next step, which serves as input to the feedback controller to enable vision-in-the-loop dynamic walking on discrete terrain. In particular, a custom convolutional neural network architecture is designed and trained to predict step length to the next foothold using a sampled image preview of the upcoming terrain at foot impact. The visual input is offered only at the beginning of each step and is shown to be sufficient for the job of dynamically stepping onto discrete footholds. Through extensive numerical studies, we show that the robot is able to successfully autonomously walk for over 100 steps without failure on a discrete terrain with footholds randomly positioned within a step length range of [45 : 85] centimeters.
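A minimal sketch of a convolutional step-length regressor of this kind, written in PyTorch; the layer sizes and the 64x64 grayscale input are placeholders, not the custom architecture described in the paper.

```python
import torch
import torch.nn as nn

class StepLengthCNN(nn.Module):
    """Hypothetical CNN regressor: terrain-preview image in, next step length out."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)

    def forward(self, img):
        return self.head(self.features(img).flatten(1))   # predicted step length

# One grayscale 64x64 preview of the upcoming terrain, sampled at foot impact.
model = StepLengthCNN()
preview = torch.rand(1, 1, 64, 64)
print(model(preview))   # would be trained with an L2 loss against measured step lengths
```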
Citations: 11
Footwear discrimination using dynamic tactile information
Pub Date : 2017-11-01 DOI: 10.1109/HUMANOIDS.2017.8246886
A. Drimus, Vedran Mikov
This paper shows that it is possible to differentiate among various types of footwear solely by using highly dimensional pressure information provided by a sensorised insole. In order to achieve this, a person equipped with two sensorised insoles streaming real-time tactile data to a computer performs normal walking patterns. The sampled data is further transformed and reduced to sets of time series which are used for the classification of footwear. The pressure sensor is formed as a footwear inlay and is based on piezoresistive rubber having 1024 tactile cells providing normal pressure information in the form of a tactile image. The data is transmitted wirelessly in real time at 30 fps from two such sensors. The online classification uses dynamic time warping distances for different extracted features to assess the most similar type of footwear based on time series similarities. The paper shows that various footwear types yield distinct tactile patterns which can be assessed by the proposed algorithm.
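The classification step rests on dynamic time warping between pressure time series. Below is a minimal sketch of a DTW distance with 1-nearest-neighbour matching over synthetic per-step series; the real features are extracted from the 1024-cell tactile images.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping distance between 1-D series."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify(query, templates):
    """1-NN over DTW distance; templates maps a footwear label to a reference series."""
    return min(templates, key=lambda label: dtw_distance(query, templates[label]))

# Synthetic stand-ins for per-step pressure features of different footwear types.
t = np.linspace(0, 1, 80)
templates = {
    "running_shoe": np.sin(np.pi * t) ** 2,
    "hard_sole":    np.clip(4 * np.sin(np.pi * t) ** 2, 0, 1),
    "barefoot":     np.sin(np.pi * t),
}
query = np.sin(np.pi * t) ** 2 + 0.05 * np.random.randn(len(t))
print(classify(query, templates))
```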
Citations: 1
Sensing device simulating human buttock for the validation of robotic devices for nursing care
Pub Date : 2017-11-01 DOI: 10.1109/HUMANOIDS.2017.8246960
Kunihiro Ogata, I. Kajitani, K. Homma, Y. Matsumoto
Robotic devices for nursing care are expected to help caregivers work with the elderly. Some robotic devices assist in the physical transfer of the elderly, and these robots come in contact with large surfaces of the human body. The regions of the buttock and the back may be uncomfortable due to these robotic devices. Therefore, sensing devices simulating a human buttock were developed to quantify and evaluate the load of a human body objectively. This buttock dummy consists of simulated bone and soft tissues, which include muscle, fat and skin. These regions have multi-axis force sensors to enable the quantification of the load due to the robotic devices used for nursing care. On measuring the soft exterior, it was found that the stiffness of the buttock dummy was similar to the human buttock. The comfort of a robotic bed was measured using the buttock dummy, and it was found that the shear force increased due to the deformation of the robotic bed. Thus, it was proven that the buttock dummy was capable of measuring the load of the human body when being used with robotic devices for nursing care.
Citations: 2
NimbRo-OP2: Grown-up 3D printed open humanoid platform for research
Pub Date : 2017-11-01 DOI: 10.1109/HUMANOIDS.2017.8246944
Grzegorz Ficht, Philipp Allgeuer, Hafez Farazi, Sven Behnke
The versatility of humanoid robots in locomotion, full-body motion, interaction with unmodified human environments, and intuitive human-robot interaction led to increased research interest. Multiple smaller platforms are available for research, but these require a miniaturized environment to interact with–and often the small scale of the robot diminishes the influence of factors which would have affected larger robots. Unfortunately, many research platforms in the larger size range are less affordable, more difficult to operate, maintain and modify, and very often closed-source. In this work, we introduce NimbRo-OP2, an affordable, fully open-source platform in terms of both hardware and software. Being almost 135 cm tall and only 18 kg in weight, the robot is not only capable of interacting in an environment meant for humans, but also easy and safe to operate and does not require a gantry when doing so. The exoskeleton of the robot is 3D printed, which produces a lightweight and visually appealing design. We present all mechanical and electrical aspects of the robot, as well as some of the software features of our well-established open-source ROS software. The NimbRo-OP2 performed at RoboCup 2017 in Nagoya, Japan, where it won the Humanoid League AdultSize Soccer competition and Technical Challenge.
Citations: 18
Online estimation of friction constraints for multi-contact whole body control
Pub Date : 2017-11-01 DOI: 10.1109/HUMANOIDS.2017.8246896
Cameron P. Ridgewell, Robert J. Griffin, T. Furukawa, B. Lattimer
This paper proposes a technique for experimentally approximating surface friction coefficients at contact time in multi-contact applications. Unlike other multi-contact formulations, our approach does not assume a standard friction coefficient, and instead induces slip in a multi-contact oriented humanoid to estimate available friction force. Incrementally increased tangential force, measured with ankle-mounted force-torque sensors, is used as the basis for slip detection and friction coefficient estimation at the hand. This technique is validated in simulation on a simple three-link model and extended to the humanoid robot platform ESCHER. Approximated friction values are utilized by the robot's whole body controller to prevent multi-contact end effector slip.
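The estimation idea can be sketched with synthetic data: ramp the tangential force, flag the sample at which the contact starts to slip, and take the tangential-to-normal force ratio at that instant as the friction coefficient estimate. The slip threshold and the generated signals below are made up for illustration.

```python
import numpy as np

# Synthetic contact data: normal force held constant, tangential force ramped.
mu_true, f_normal = 0.45, 40.0              # ground-truth values for the fake data
f_tangential = np.linspace(0.0, 30.0, 300)
# End-effector displacement stays near zero until stiction breaks, then grows.
slip = f_tangential > mu_true * f_normal
displacement = np.where(slip, np.cumsum(slip) * 1e-4, 0.0)
displacement += 1e-5 * np.random.randn(f_tangential.size)

def estimate_friction(f_t, f_n, disp, slip_threshold=5e-4):
    """Return the tangential/normal force ratio at the first sample whose
    displacement exceeds the slip threshold (i.e., at slip onset)."""
    slip_idx = np.argmax(disp > slip_threshold)
    return f_t[slip_idx] / f_n

mu_hat = estimate_friction(f_tangential, f_normal, displacement)
print(f"estimated friction coefficient: {mu_hat:.2f} (true value {mu_true})")
```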
Citations: 3
Real-time evolutionary model predictive control using a graphics processing unit
Pub Date : 2017-11-01 DOI: 10.1109/HUMANOIDS.2017.8246929
Phillip Hyatt, Marc D. Killpack
With humanoid robots becoming more complex and operating in un-modeled or human environments, there is a growing need for control methods that are scalable and robust, while still maintaining compliance for safety reasons. Model Predictive Control (MPC) is an optimal control method which has proven robust to modeling error and disturbances. However, it can be difficult to implement for high degree of freedom (DoF) systems due to the optimization problem that must be solved. While evolutionary algorithms have proven effective for complex large-scale optimization problems, they have not been formulated to find solutions quickly enough for use with MPC. This work details the implementation of a parallelized evolutionary MPC (EMPC) algorithm which is able to run in real-time through the use of a Graphics Processing Unit (GPU). This parallelization is accomplished by simulating candidate control input trajectories in parallel on the GPU. We show that this framework is more flexible in terms of cost function definition than traditional MPC and that it shows promise for finding solutions for high DoF systems.
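A minimal sketch of the sampling-based evolutionary MPC idea on a double integrator, with candidate rollouts vectorised in NumPy rather than executed on a GPU as in the paper; the population size, mutation scale, and cost weights are arbitrary.

```python
import numpy as np

dt, horizon = 0.1, 20
n_candidates, n_elite, n_generations = 256, 16, 5
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])

def rollout_costs(x0, U):
    """Simulate every candidate input trajectory in parallel; U is (candidates, horizon)."""
    x = np.repeat(x0[None, :], U.shape[0], axis=0)
    costs = np.zeros(U.shape[0])
    for k in range(U.shape[1]):
        u = np.clip(U[:, k], -2.0, 2.0)
        x = x @ A.T + np.outer(u, B.flatten())
        costs += np.sum(x**2, axis=1) + 0.01 * u**2
    return costs

def empc_action(x0):
    """Evolve a population of input trajectories; return the first input of the best one."""
    U = np.zeros((n_candidates, horizon))
    for _ in range(n_generations):
        elite = U[np.argsort(rollout_costs(x0, U))[:n_elite]]
        parents = elite[np.random.randint(n_elite, size=n_candidates)]
        U = parents + 0.3 * np.random.randn(n_candidates, horizon)
        U[:n_elite] = elite                   # keep the elites unmutated
    return U[np.argmin(rollout_costs(x0, U)), 0]

# Receding horizon: re-optimise from the measured state at every control step.
x = np.array([1.0, 0.0])
for _ in range(30):
    u = np.clip(empc_action(x), -2.0, 2.0)
    x = A @ x + B.flatten() * u
print("final state:", x)
```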
Citations: 12