
2020 IEEE-RAS 20th International Conference on Humanoid Robots (Humanoids): Latest Publications

Low-Latency Immersive 6D Televisualization with Spherical Rendering
Pub Date : 2021-07-19 DOI: 10.1109/HUMANOIDS47582.2021.9555797
M. Schwarz, Sven Behnke
We present a method for real-time stereo scene capture and remote VR visualization that allows a human operator to freely move their head and thus intuitively control their perspective during teleoperation. The stereo camera is mounted on a 6D robotic arm, which follows the operator’s head pose. Existing VR teleoperation systems either induce high latencies on head movements, leading to motion sickness, or use scene reconstruction methods to allow re-rendering of the scene from different perspectives, which cannot handle dynamic scenes effectively. Instead, we present a decoupled approach which renders captured camera images as spheres, assuming constant distance. This allows very fast re-rendering on head pose changes while keeping the resulting temporary distortions during head translations small. We present qualitative examples, quantitative results in the form of lab experiments and a small user study, showing that our method outperforms other visualization methods.
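The abstract does not include an implementation; the following is a minimal numpy sketch of the constant-distance idea it describes: every pixel ray of the captured image is pushed out to a sphere of assumed radius around the capture pose and re-projected into the viewer's current head pose. The function names, the pinhole intrinsics `K`, the pose variables, and the 3 m radius are illustrative assumptions, not values from the paper.

```python
import numpy as np

def pixel_rays(K, width, height):
    """Unit viewing rays for every pixel of a pinhole camera with intrinsics K."""
    u, v = np.meshgrid(np.arange(width), np.arange(height))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T   # 3 x N
    rays = np.linalg.inv(K) @ pix
    return rays / np.linalg.norm(rays, axis=0, keepdims=True)

def sphere_reprojection(K, R_cap, t_cap, R_view, t_view, width, height, radius=3.0):
    """Place each captured pixel on a sphere of fixed radius around the capture
    pose and project it into the viewer's current head pose."""
    rays = pixel_rays(K, width, height)                        # capture-camera frame
    pts_world = R_cap @ (radius * rays) + t_cap.reshape(3, 1)  # fixed-distance points
    pts_view = R_view.T @ (pts_world - t_view.reshape(3, 1))   # viewer-camera frame
    proj = K @ pts_view
    return (proj[:2] / proj[2]).reshape(2, height, width)      # new (u, v) per pixel

# Illustrative usage with identity orientations and a toy 640x480 camera:
K = np.array([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])
I, z = np.eye(3), np.zeros(3)
uv = sphere_reprojection(K, I, z, I, z + np.array([0.05, 0.0, 0.0]), 640, 480)
```

Because only the viewer pose changes between display frames, this re-projection can run at display rate independently of when the camera image arrived, which is the property the abstract exploits to keep head-motion latency low.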
Citations: 8
Design and Development of a Flying Humanoid Robot Platform with Bi-copter Flight Unit
Pub Date : 2021-07-19 DOI: 10.1109/HUMANOIDS47582.2021.9555801
T. Anzai, Yuta Kojio, Tasuku Makabe, K. Okada, M. Inaba
In this paper, we propose a novel flying humanoid robot platform with a bi-copter flight unit. Humanoid robots can move by walking, but walking alone is not sufficient for some tasks. To enhance the mobility of humanoid robots, we apply aerial robotics and develop a flying humanoid robot capable of both walking and flying. We describe the modeling and control of the bi-copter and a takeoff pose generation method for the flying humanoid robot. We show the hardware implementations of the bi-copter flight unit, the humanoid robot, and the complete flying humanoid robot system. We perform several experiments to verify the effectiveness of the flight control, the extended mobility, and the implemented robot system, including perception.
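The abstract does not spell out the bi-copter model. For reference, a commonly used simplified thrust and torque allocation for two rotors mounted at body positions $(0, \pm d, h)$, producing thrusts $f_i$ tilted by angles $\alpha_i$ about the arm axis (rotor drag torques omitted; this is an illustrative textbook-style model, not necessarily the one used in the paper), is

$$
\mathbf{F} = \begin{bmatrix} f_1\sin\alpha_1 + f_2\sin\alpha_2 \\ 0 \\ f_1\cos\alpha_1 + f_2\cos\alpha_2 \end{bmatrix},
\qquad
\boldsymbol{\tau} = \begin{bmatrix} d\,(f_1\cos\alpha_1 - f_2\cos\alpha_2) \\ h\,(f_1\sin\alpha_1 + f_2\sin\alpha_2) \\ d\,(f_2\sin\alpha_2 - f_1\sin\alpha_1) \end{bmatrix}.
$$

In this form, roll is driven by differential thrust, pitch by the common tilt, and yaw by differential tilt, which is what lets a two-rotor unit control its full attitude.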
Citations: 8
Feature-based Deep Learning of Proprioceptive Models for Robotic Force Estimation
Pub Date : 2021-07-19 DOI: 10.1109/HUMANOIDS47582.2021.9555682
Erik Berger, Alexander Uhlig
Safe and meaningful interaction with robotic systems during behavior execution requires accurate sensing capabilities. This can be achieved with force-torque sensors, which are often heavy and expensive and require an additional power supply. Consequently, providing accurate sensing capabilities to lightweight robots with limited payload is a challenging task. Furthermore, such sensors cannot distinguish between task-specific regular forces and external influences such as those induced by human co-workers. To solve this, robots often rely on a large number of manually generated rules, which is a time-consuming procedure. This paper presents a data-driven machine learning approach that enhances robotic behavior with estimates of the expected proprioceptive forces (intrinsic) and unexpected forces (extrinsic) exerted by the environment. First, the robot’s common internal sensors are recorded together with ground-truth measurements of the actual forces during regular and perturbed behavior executions. The resulting data is used to generate features that contain a compact representation of behavior-specific intrinsic and extrinsic fluctuations. Those features are then used for deep learning of proprioceptive models, which enables a robot to accurately distinguish the amounts of intrinsic and extrinsic force. Experiments performed with the UR5 robot show a substantial improvement in accuracy over the force values reported in previous research.
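To make the pipeline concrete, here is a hedged sketch (not the authors' code) of the feature-and-regression idea: windowed statistics over the robot's internal sensors are mapped to the expected intrinsic force, and any deviation observed at runtime is attributed to extrinsic influences. The window length, array shapes, random-forest model, and synthetic data are assumptions for illustration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

WIN = 50  # samples per feature window (illustrative choice)

def window_features(signals, win=WIN):
    """Per-window statistics of the internal sensor channels
    (e.g. joint currents and velocities): mean, std, min, max."""
    n = signals.shape[0] // win
    w = signals[: n * win].reshape(n, win, -1)
    return np.concatenate([w.mean(1), w.std(1), w.min(1), w.max(1)], axis=1)

def window_targets(forces, win=WIN):
    """Mean ground-truth force per window from the calibration recordings."""
    n = forces.shape[0] // win
    return forces[: n * win].reshape(n, win, -1).mean(1)

# Placeholder recordings standing in for the logged behavior executions.
rng = np.random.default_rng(0)
internal_log = rng.normal(size=(5000, 12))   # internal sensor channels
force_log = rng.normal(size=(5000, 3))       # measured contact forces

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(window_features(internal_log), window_targets(force_log))

# At runtime the model predicts the expected *intrinsic* force; the gap between
# a measured or otherwise estimated force and this prediction is treated as an
# *extrinsic* influence, e.g. a human co-worker pushing the robot.
intrinsic_hat = model.predict(window_features(internal_log[:500]))
```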
Citations: 2
Pseudo Direct and Inverse Optimal Control based on Motion Synthesis using FPCA
Pub Date : 2021-07-19 DOI: 10.1109/HUMANOIDS47582.2021.9555773
Soya Shimizu, K. Ayusawa, G. Venture
This paper presents a method to easily estimate the cost weights of cost functions and the multi-joint motion time series of humanoid robots, using functional principal component analysis (FPCA) instead of direct optimal control (DOC) and inverse optimal control (IOC). Each given cost-weight exemplar can be converted into a point in the FPC space by applying FPCA. The cost weights and the FPC space make it possible to conveniently synthesize the motion model data and the cost function factors, and therefore versatile motion data. The proposed method surpasses classic DOC and IOC methods in calculation time and efficiency for novel data analysis. Furthermore, the proposed method is applied to the humanoid robot HRP4 to generate arm motions, as an experimental proof of concept with several cost functions. The accuracy of the motion generation is experimentally confirmed.
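As background for readers unfamiliar with FPCA, the sketch below (an approximation, not the paper's implementation) treats each discretized joint trajectory as one sample, extracts principal components, and reconstructs a motion from a point in the FPC space. Sizes and data are placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA

# Each demonstration is a joint trajectory resampled to T samples; FPCA on such
# discretized functions is approximated here by ordinary PCA on the flattened
# trajectories.
rng = np.random.default_rng(0)
T, n_joints, n_demos = 100, 7, 40
demos = rng.normal(size=(n_demos, T * n_joints))   # placeholder trajectories

fpca = PCA(n_components=3)
scores = fpca.fit_transform(demos)     # each motion becomes a point in FPC space

# A point in FPC space is decoded back into a full motion as the mean trajectory
# plus a weighted sum of principal components, which is why searching over a few
# scores is much cheaper than solving a full DOC/IOC problem.
new_scores = np.array([[0.5, -0.2, 0.1]])
synthesized = fpca.inverse_transform(new_scores).reshape(T, n_joints)
```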
Citations: 0
From Space to Earth – Relative-CoM-to-Foot (RCF) control yields high contact robustness
Pub Date : 2021-07-19 DOI: 10.1109/HUMANOIDS47582.2021.9555804
Johannes Englsberger, A. Giordano, Achraf Hiddane, R. Schuller, F. Loeffl, George Mesesan, C. Ott
This paper introduces the Simplest Articulated Free-Floating (SAFF) model, a low-dimensional model facilitating the examination of controllers, which are designed for free-floating robots that are subject to gravity. Two different state-of-the-art control approaches, namely absolute CoM control accompanied by an assumption about the foot acceleration, and a controller combining absolute CoM and foot control objectives, are shown to yield exponential stability in the nominal case, while becoming unstable if the foot contact is lost. As an improvement over the state of the art, the so-called Relative-CoM-to-Foot (RCF) controller is introduced, which again yields exponential stability nominally, while preserving a BIBO stable behavior even in case of a complete contact loss. The controller performance is validated in various simulations.
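The paper's control laws are not reproduced in the abstract. The quantity that gives the RCF controller its name is the CoM position expressed relative to the foot, $\mathbf{r} = \mathbf{c} - \mathbf{p}_f$. As a purely illustrative example of regulating such a relative coordinate (a generic PD tracking law, not the controller proposed in the paper), one could command

$$
\ddot{\mathbf{r}}_{\mathrm{cmd}} = \ddot{\mathbf{r}}_{\mathrm{ref}} + K_d\,(\dot{\mathbf{r}}_{\mathrm{ref}} - \dot{\mathbf{r}}) + K_p\,(\mathbf{r}_{\mathrm{ref}} - \mathbf{r}).
$$

Because only relative quantities enter such a law, it stays well defined when the foot loses contact, which hints at why a relative formulation can remain well behaved in the contact-loss case discussed in the abstract.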
Citations: 0
Combining Task and Motion Planning using Policy Improvement with Path Integrals
Pub Date : 2021-07-19 DOI: 10.1109/HUMANOIDS47582.2021.9555684
Dominik Urbaniak, Alejandro Agostini, Dongheui Lee
Task and motion planning deals with complex tasks that require a robot to automatically define and execute multi-step sequences of actions in cluttered scenarios. In this context, a linear motion is often not sufficient to approach a target object since collisions of the gripper with other objects or the target object might occur. Thus, motion planners should be able to generate collision-free trajectories for every particular configuration of obstacles for grounding the symbolic actions of the task plan. Current approaches either search for feasible motions offline using computationally expensive trial-and-error processes on physically realistic simulations or learn a set of motion parameters for particular object configuration spaces with little generalization. This work proposes an appealing alternative by efficiently generating trajectories for the collision-free execution of symbolic actions in variable scenarios without the need of intensive offline simulations. Our approach combines the benefit of learning from demonstration, to quickly generate an initial set of motion parameters for each symbolic action, with policy improvement with path integrals, to diversify this initial set of parameters to cope with different obstacle configurations. We show how the improved flexibility is achieved after a few minutes of training and successfully solves tasks requiring different sequences of picking and placing actions in variable configurations of obstacles.
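For readers unfamiliar with Policy Improvement with Path Integrals (PI^2), here is a minimal, self-contained sketch of the parameter-update rule the approach builds on: perturb the motion parameters, weight each rollout by the exponentiated negative cost, and move the parameters toward the low-cost perturbations. The cost function, noise level, and parameter dimension are toy placeholders, not the paper's setup.

```python
import numpy as np

def pi2_update(theta, cost_fn, n_rollouts=20, noise_std=0.1, lam=1.0, rng=None):
    """One PI^2-style update of a motion-parameter vector theta."""
    if rng is None:
        rng = np.random.default_rng()
    eps = rng.normal(scale=noise_std, size=(n_rollouts, theta.size))
    costs = np.array([cost_fn(theta + e) for e in eps])
    w = np.exp(-(costs - costs.min()) / lam)   # softmax-like rollout weights
    w /= w.sum()
    return theta + w @ eps                     # weighted average of perturbations

# Toy example: adapt trajectory parameters so a scalar "collision cost" drops.
theta = np.zeros(5)                            # e.g. DMP-like shape weights
target = np.array([1.0, -0.5, 0.2, 0.0, 0.3])
cost = lambda th: float(np.sum((th - target) ** 2))
for _ in range(50):
    theta = pi2_update(theta, cost, rng=np.random.default_rng(0))
```

In the paper's setting, the initial parameters come from demonstration and the cost would penalize collisions for a given obstacle configuration, so a few such updates diversify the demonstrated motion rather than learning it from scratch.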
Citations: 3
Development of Amphibious Humanoid for Behavior Acquisition on Land and Underwater
Pub Date : 2021-07-19 DOI: 10.1109/HUMANOIDS47582.2021.9555671
Tasuku Makabe, T. Anzai, Youhei Kakiuchi, K. Okada, M. Inaba
Humanoid research aimed at verifying human movements and substituting for human work deals with acquiring motions in the various environments to which humans can adapt. Water, however, remains an environment that humanoids cannot yet handle, even though humans adapt to it and demand work alternatives there. There is therefore room to build a platform that can operate both underwater and on the ground, and to verify methods for acquiring underwater behavior. This study constructs a humanoid that can operate in both land and water environments using modular components that allow the body structure to be changed easily. As the environmental forces change, such as friction, water resistance, and buoyancy, the humanoid performed locomotion such as swimming and walking in multiple land and water settings. Walking experiments showed that the viscosity of the underwater environment effectively reduces the speed of falls and prevents damage during humanoid experiments. We also investigated, through swimming, a solution to the problem that humanoids are vulnerable to disturbances in environments where friction is hard to obtain.
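The change in environmental forces mentioned above is dominated by two standard hydro terms, which give a rough sense of why underwater falls are slower: the buoyant force on the displaced volume $V$ and the drag on a body of frontal area $A$ moving at speed $v$ (the symbols are generic, not measurements from the paper):

$$
F_b = \rho_w\, g\, V, \qquad F_d = \tfrac{1}{2}\,\rho_w\, C_d\, A\, v^2 ,
$$

where $\rho_w$ is the water density and $C_d$ the drag coefficient. Both terms oppose or slow the falling motion, consistent with the damage-free falls reported in the walking experiments.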
Citations: 2
Motion Modification Method of Musculoskeletal Humanoids by Human Teaching Using Muscle-Based Compensation Control
Pub Date : 2021-07-19 DOI: 10.1109/HUMANOIDS47582.2021.9555772
Kento Kawaharazuka, Yuya Koga, Manabu Nishiura, Yusuke Omura, Yuki Asano, K. Okada, Koji Kawasaki, M. Inaba
While musculoskeletal humanoids have the advantages of various biomimetic structures, it is difficult to accurately control the body, which is challenging to model. Although various learning-based control methods have been developed so far, they cannot completely absorb model errors, and recognition errors are also bound to occur. In this paper, we describe a method to modify the movement of the musculoskeletal humanoid by applying external force during the movement, taking advantage of its flexible body. Considering the fact that the joint angles cannot be measured, and that the external force greatly affects the nonlinear elastic element and not the actuator, the modified motion is reproduced by the proposed muscle-based compensation control. This method is applied to a musculoskeletal humanoid, Musashi, and its effectiveness is confirmed.
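The abstract does not state the compensation law. As a generic illustration of tension feedback in muscle space (explicitly an assumption, not the paper's controller), one could offset each commanded muscle length in proportion to the deviation of its measured tension $T_i$ from the nominal tension predicted for the unperturbed motion:

$$
\Delta l_i = k\,\bigl(T_i^{\mathrm{meas}} - T_i^{\mathrm{nom}}\bigr), \qquad k > 0,
$$

so that muscles whose tension rises under the human's push are lengthened and the externally imposed posture change is absorbed rather than resisted. This matches the abstract's premise that the external force appears in the elastic (tendon) elements rather than at the actuators.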
Citations: 0
Robust Balancing Control of a Spring-legged Robot based on a High-order Sliding Mode Observer
Pub Date : 2021-07-19 DOI: 10.1109/HUMANOIDS47582.2021.9555776
Juan D. Gamba, A. C. Leite, R. Featherstone
This paper presents a simulation study of the balancing problem for a monopod robot in which the lower body (the leg) has been modified to include a passively spring-loaded prismatic joint. Such a mechanism can move by hopping but can also stand and balance on a single point. We aim to investigate the extent to which a balance controller can deal with the large values and rapid changes in the spring-damper forces, while controlling the absolute positions and orientations of its parts and balancing on one leg. It can be shown that a good performance is achieved if the spring-loaded joint is instrumented and calibrated so that its position and velocity, as well as the stiffness and damping coefficients, are considered when calculating the controller state variables. We also demonstrate the effectiveness of the balance controller by adding a high-order sliding mode (HOSM) observer based on the finite-time algorithm for robust parameter estimation of the stiffness and damping coefficients. The stability analysis and convergence proofs are presented based on the Lyapunov stability theory. Numerical simulations are included to illustrate the performance and feasibility of the proposed methodology.
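For context, higher-order sliding-mode observers build on the classic super-twisting structure. For a second-order system $\dot{x}_1 = x_2$, $\dot{x}_2 = f(x,u) + \delta(t)$ with bounded disturbance $\delta$, the standard form (shown here as background, not the exact observer of the paper) is

$$
\dot{\hat{x}}_1 = \hat{x}_2 + k_1\,|x_1-\hat{x}_1|^{1/2}\,\operatorname{sign}(x_1-\hat{x}_1), \qquad
\dot{\hat{x}}_2 = f(\hat{x},u) + k_2\,\operatorname{sign}(x_1-\hat{x}_1),
$$

with gains $k_1, k_2$ chosen large enough relative to the disturbance bound to obtain finite-time convergence of the estimation error, which is the property exploited here for robust estimation of the stiffness and damping coefficients.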
Citations: 2
Identification of Common Force-based Robot Skills from the Human and Robot Perspective
Pub Date : 2021-07-19 DOI: 10.1109/HUMANOIDS47582.2021.9555681
Thomas Eiband, Dongheui Lee
Learning from Demonstration (LfD) can significantly speed up the knowledge transfer from human to robot, which has been proven for relatively unconstrained actions such as pick and place. However, transferring contact or force-based skills (contact skills) to a robot is noticeably harder since force and position constraints need to be considered simultaneously. We propose a set of contact skills, which differ in the force and kinematic constraints. In a first user study, several subjects were asked to term a variety of force-based interactions, from which skill names were derived. In a second and third user study, the identified skill names are used to let a test group of subjects classify the shown interactions. To evaluate the skill recognition from the robot perspective, we propose a feature-based classification scheme to recognize such skills with a robotic system in a LfD setting. Our findings prove that humans are able to understand the meaning of the different skills and, using the classification pipeline, the robot is able to recognize the different skills from human demonstrations.
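As an illustration of what a feature-based classifier over force and kinematic signals can look like (the features, class set, and synthetic data below are assumptions, not the descriptors used in the paper):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def skill_features(wrench, pose, dt=0.01):
    """Compact descriptors of one demonstrated interaction: force statistics
    plus a simple indicator of how much the end-effector pose moved."""
    f_norm = np.linalg.norm(wrench[:, :3], axis=1)           # contact force magnitude
    motion = np.linalg.norm(np.diff(pose, axis=0), axis=1) / dt
    return np.array([f_norm.mean(), f_norm.max(), f_norm.std(),
                     motion.mean(), motion.max()])

# Placeholder demonstrations: (wrench, pose, label) triples would come from the
# kinesthetic demonstrations described in the paper.
rng = np.random.default_rng(0)
X = np.stack([skill_features(rng.normal(size=(200, 6)),
                             rng.normal(size=(200, 3))) for _ in range(30)])
y = rng.integers(0, 4, size=30)    # e.g. press / slide / insert / touch

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
predicted_skill = clf.predict(X[:1])
```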
Citations: 4