
Latest publications from the 2019 IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids)

Autonomous Bimanual Functional Regrasping of Novel Object Class Instances
Pub Date : 2019-10-01 DOI: 10.1109/Humanoids43949.2019.9035030
D. Pavlichenko, Diego Rodriguez, Christian Lenz, Max Schwarz, Sven Behnke
In human-made scenarios, robots need to be able to fully operate objects in their surroundings, i.e., objects are required to be functionally grasped rather than only picked. This imposes very strict constraints on the object pose such that a direct grasp can be performed. Inspired by the anthropomorphic nature of humanoid robots, we propose an approach that first grasps an object with one hand, obtaining full control over its pose, and subsequently performs the functional grasp with the second hand. Thus, we develop a fully autonomous pipeline for dual-arm functional regrasping of novel familiar objects, i.e., objects never seen before that belong to a known object category, e.g., spray bottles. This process involves semantic segmentation, object pose estimation, non-rigid mesh registration, grasp sampling, handover pose generation and in-hand pose refinement. The latter is used to compensate for the unpredictable object movement during the first grasp. The approach is applied to a human-like upper body. To the best of the authors' knowledge, this is the first system that exhibits autonomous bimanual functional regrasping capabilities. We demonstrate that our system yields reliable success rates and can be applied online to real-world tasks using only one off-the-shelf RGB-D sensor.
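The staged pipeline listed in the abstract can be pictured as a chain of processing steps. The following is a minimal illustrative sketch, not the authors' implementation: all function names and the point-cloud format are hypothetical, and only the first two stages are stubbed out.

```python
# Hypothetical sketch of the staged regrasp pipeline from the abstract.
# The point-cloud representation and all names are invented for illustration.

def segment(rgbd_points):
    """Semantic segmentation: keep only points labelled as the target object."""
    return [p for p in rgbd_points if p["label"] == "spray_bottle"]

def estimate_pose(object_points):
    """Object pose estimation (placeholder: centroid of the segmented points)."""
    n = len(object_points)
    return tuple(sum(p["xyz"][i] for p in object_points) / n for i in range(3))

def regrasp_pipeline(rgbd_points):
    """Chain the stages named in the abstract; the later stages (non-rigid mesh
    registration, grasp sampling, handover pose generation, in-hand pose
    refinement) would follow the same pattern."""
    object_points = segment(rgbd_points)
    return estimate_pose(object_points)
```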
Citations: 6
Autonomous Learning of Assembly Tasks from the Corresponding Disassembly Tasks
Pub Date : 2019-10-01 DOI: 10.1109/Humanoids43949.2019.9035052
Mihael Simonič, L. Žlajpah, A. Ude, B. Nemec
An assembly task is in many cases just a reverse execution of the corresponding disassembly task. During assembly, the object being assembled passes consecutively from state to state until completed, and the set of possible movements becomes more and more constrained. Based on the observation that autonomous learning of physically constrained tasks can be advantageous, we reuse the information obtained while learning disassembly when performing assembly. For autonomous learning of a disassembly policy, we propose to use hierarchical reinforcement learning, where learning is decomposed into high-level decision making and an underlying low-level intelligent compliant controller, which exploits the natural motion in a constrained environment. During the reverse execution of the disassembly policy, the motion is further optimized by means of an iterative learning controller. The proposed approach was verified on two challenging tasks: a maze learning problem and autonomous learning of inserting a car bulb into the casing.
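As a toy illustration of the reversal idea (not the authors' method, and far simpler than their hierarchical scheme), a tabular Q-learning agent can learn a short disassembly sequence over a discrete chain of states; reversing the learned action sequence then yields the assembly sequence. All states, actions, and hyperparameters below are invented for illustration.

```python
import random

# Toy sketch: states 0..3 go from "assembled" to "fully disassembled";
# action 1 detaches the next part, action 0 does nothing.
N_STATES, GOAL = 4, 3
ACTIONS = (0, 1)

def learn_disassembly(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != GOAL:
            if rng.random() < eps:
                a = rng.choice(ACTIONS)                     # explore
            else:
                a = max(ACTIONS, key=lambda b: q[(s, b)])   # exploit
            s_next = min(s + a, GOAL)
            reward = 1.0 if s_next == GOAL else 0.0
            q[(s, a)] += alpha * (reward + gamma * max(q[(s_next, b)] for b in ACTIONS) - q[(s, a)])
            s = s_next
    # Greedy rollout of the learned disassembly policy (step-capped for safety).
    s, seq = 0, []
    while s != GOAL and len(seq) < 2 * N_STATES:
        a = max(ACTIONS, key=lambda b: q[(s, b)])
        seq.append(a)
        s = min(s + a, GOAL)
    return seq

disassembly = learn_disassembly()
assembly = list(reversed(disassembly))  # reverse execution gives assembly
```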
Citations: 7
Dual-Arm In-Hand Manipulation Using Visual Feedback
Pub Date : 2019-10-01 DOI: 10.1109/Humanoids43949.2019.9035058
S. Cruciani, Kaiyu Hang, Christian Smith, D. Kragic
In this work, we address the problem of executing in-hand manipulation based on visual input. Given an initial grasp, the robot has to change its grasp configuration without releasing the object. We propose a method for in-hand manipulation planning and execution based on information on the object's shape using a dual-arm robot. From the available information on the object, which can be a complete point cloud but also partial data, our method plans a sequence of rotations and translations to reconfigure the object's pose. This sequence is executed using non-prehensile pushes defined as relative motions between the two robot arms.
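The planned sequence of rotations and translations can be illustrated with a toy planar pose update. This is purely illustrative: the paper plans such sequences for real object poses and executes them via non-prehensile pushes, which this sketch does not model.

```python
import math

# Toy illustration: apply a planned sequence of rotations and translations
# to a planar object pose (x, y, theta). Action encoding is invented here.

def apply_sequence(pose, actions):
    x, y, th = pose
    for kind, val in actions:
        if kind == "rot":
            th = (th + val) % (2 * math.pi)       # rotate in place
        else:                                      # "trans": move along heading
            x += val * math.cos(th)
            y += val * math.sin(th)
    return (x, y, th)
```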
Citations: 5
Terrain Segmentation and Roughness Estimation using RGB Data: Path Planning Application on the CENTAURO Robot
Pub Date : 2019-10-01 DOI: 10.1109/Humanoids43949.2019.9035009
Vivekanandan Suryamurthy, V. S. Raghavan, Arturo Laurenzi, N. Tsagarakis, D. Kanoulas
Robots operating in real world environments require a high-level perceptual understanding of the chief physical properties of the terrain they are traversing. In unknown environments, roughness is one such important terrain property that could play a key role in devising robot control/planning strategies. In this paper, we present a fast method for predicting pixel-wise labels of terrain (stone, sand, road/sidewalk, wood, grass, metal) and roughness estimation, using a single RGB-based deep neural network. Real world RGB images are used to experimentally validate the presented approach. Furthermore, we demonstrate an application of our proposed method on the centaur-like wheeled-legged robot CENTAURO, by integrating it with a navigation planner that is capable of re-configuring the leg joints to modify the robot footprint polygon for stability purposes or for safe traversal among obstacles.
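For the path-planning application, per-pixel labels and roughness estimates can be fused into a traversal-cost map. The sketch below is illustrative only: the class names come from the abstract, but the cost formula and weights are made up, not the authors'.

```python
# Illustrative cost-map construction from per-pixel terrain labels and
# roughness estimates (weights and formula are invented for this sketch).

ROUGHNESS_WEIGHT = 2.0

def cost_map(labels, roughness, obstacle_classes=frozenset({"stone"})):
    """labels/roughness: row-major 2D lists of class names / roughness in [0, 1].
    Obstacle pixels get infinite cost; others pay a roughness penalty."""
    return [
        [float("inf") if lbl in obstacle_classes
         else 1.0 + ROUGHNESS_WEIGHT * r
         for lbl, r in zip(row_lbl, row_r)]
        for row_lbl, row_r in zip(labels, roughness)
    ]
```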
Citations: 19
Sensor-based Whole-Body Planning/Replanning for Humanoid Robots
Pub Date : 2019-10-01 DOI: 10.1109/Humanoids43949.2019.9035017
P. Ferrari, Marco Cognetti, G. Oriolo
We propose a sensor-based motion planning/replanning method for a humanoid that must execute a task implicitly requiring locomotion. It is assumed that the environment is unknown and the robot is equipped with a depth sensor. The proposed approach hinges upon three modules that run concurrently: mapping, planning and execution. The mapping module is in charge of incrementally building a 3D environment map during the robot motion, based on the information provided by the depth sensor. The planning module computes future motions of the humanoid, taking into account the geometry of both the environment and the robot. To this end, it uses a two-stage local motion planner consisting of a randomized CoM movement-primitive-based algorithm that allows on-line replanning. Previously planned motions are performed through the execution module. The proposed approach is validated through simulations in V-REP for the humanoid robot NAO.
Citations: 2
Active Vision for Extraction of Physically Plausible Support Relations
Pub Date : 2019-10-01 DOI: 10.1109/Humanoids43949.2019.9035018
Markus Grotz, D. Sippel, T. Asfour
Robots manipulating objects in cluttered scenes require a semantic scene understanding, which describes objects and their relations. Knowledge about physically plausible support relations among objects in such scenes is key for action execution. Due to occlusions, however, support relations often cannot be reliably inferred from a single view only. In this work, we present an active vision system that mitigates occlusion, and explores the scene for object support relations. We extend our previous work in which physically plausible support relations are extracted based on geometric primitives. The active vision system generates view candidates based on existing support relations among the objects, and selects the next best view. We evaluate our approach in simulation, as well as on the humanoid robot ARMAR-6, and show that the active vision system improves the semantic scene model by extracting physically plausible support relations from multiple views.
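A next-best-view choice of this kind can be sketched as scoring each candidate view by how many currently uncertain support relations it would help disambiguate. This is a hedged illustration, not the authors' scoring function; the data structures are hypothetical.

```python
# Illustrative next-best-view selection: pick the candidate view that covers
# the most still-uncertain support relations. All structures are invented.

def next_best_view(candidates, uncertain_relations, visible_from):
    """candidates: iterable of view ids.
    uncertain_relations: set of (supported, supporting) pairs still ambiguous.
    visible_from: dict mapping view id -> set of relations observable from it."""
    def score(view):
        return len(uncertain_relations & visible_from[view])
    return max(candidates, key=score)
```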
Citations: 8
Development of a Humanoid Dual Arm System for a Single Spherical Wheeled Balancing Mobile Robot
Pub Date : 2019-10-01 DOI: 10.1109/Humanoids43949.2019.9034999
Roberto Shut, R. Hollis
This paper presents a new 14-DoF dual manipulation system for the CMU ballbot. The result is a new type of robot that combines smooth omnidirectional motion with the capability to interact with objects and the environment through manipulation. The system includes a pair of 7-DoF arms. Each arm weighs 12.9 kg, with a reach of 0.815 m, and a maximum payload of 10 kg at full extension. The ballbot's arms have a larger payload-to-weight ratio than commercial cobot arms with similar or greater payload. Design features include highly integrated sensor-actuator-control units in each joint, lightweight exoskeleton structure, and anthropomorphic kinematics. The integration of the arms with the CMU ballbot is demonstrated through heavy payload carrying and balancing experiments.
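The figures quoted in the abstract make the payload-to-weight claim easy to check:

```python
# Payload-to-weight ratio implied by the abstract's numbers (per arm).
arm_mass_kg = 12.9
max_payload_kg = 10.0
ratio = max_payload_kg / arm_mass_kg  # roughly 0.78
```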
Citations: 5
Placing Objects with prior In-Hand Manipulation using Dexterous Manipulation Graphs
Pub Date : 2019-10-01 DOI: 10.1109/Humanoids43949.2019.9035033
Joshua A. Haustein, S. Cruciani, Rizwan Asif, Kaiyu Hang, D. Kragic
We address the problem of planning the placement of a grasped object with a robot manipulator. More specifically, the robot is tasked to place the grasped object such that a placement preference function is maximized. For this, we present an approach that uses in-hand manipulation to adjust the robot's initial grasp to extend the set of reachable placements. Given an initial grasp, the algorithm computes a set of grasps that can be reached by pushing and rotating the object in-hand. With this set of reachable grasps, it then searches for a stable placement that maximizes the preference function. If successful, it returns a sequence of in-hand pushes to adjust the initial grasp to a more advantageous grasp together with a transport motion that carries the object to the placement. We evaluate our algorithm's performance on various placing scenarios, and observe its effectiveness even in challenging scenes containing many obstacles. Our experiments demonstrate that re-grasping with in-hand manipulation increases the quality of placements the robot can reach. In particular, it enables the algorithm to find solutions in situations where safe placing with the initial grasp wouldn't be possible.
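The search described above (over reachable grasps for a stable, preference-maximizing placement) can be sketched as a simple exhaustive loop. The callbacks and data are hypothetical stand-ins for the paper's grasp set, placement generator, stability check, and preference function.

```python
# Illustrative search over reachable grasps for the stable placement that
# maximizes a preference function. All callbacks are hypothetical.

def best_placement(reachable_grasps, placements_for, preference, is_stable):
    """Return (grasp, placement) maximizing preference over stable placements,
    or None if no reachable grasp admits a stable placement."""
    best = None
    for g in reachable_grasps:
        for p in placements_for(g):
            if is_stable(p) and (best is None or preference(p) > preference(best[1])):
                best = (g, p)
    return best
```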
Citations: 6
Integration of Dual-Arm Manipulation in a Passivity Based Whole-Body Controller for Torque-Controlled Humanoid Robots
Pub Date : 2019-10-01 DOI: 10.1109/Humanoids43949.2019.9035010
J. Garcia-Haro, Bernd Henze, George Mesesan, Santiago Martínez de la Casa Díaz, C. Ott
This work presents an extension of balance control for torque-controlled humanoid robots. Within a non-strict task hierarchy, the controller allows the robot to use the feet end-effectors to balance, while the remaining hand end-effectors can be used to perform dual-arm manipulation. The controller generates a passive and compliant behaviour to regulate the location of the centre of mass (CoM), the orientation of the hip and the poses of each end-effector assigned to the task of interaction (in this case bi-manipulation). An appropriate wrench (force and torque) is then applied to each of the end-effectors employed for the task. The key feature of this new controller is that the desired wrench at the CoM is computed as the sum of the balancing and bi-manipulation wrenches. The bi-manipulation wrenches are obtained through a new dynamic model that allows compliantly executing tasks of approaching the grip and manipulating large objects. In addition, the feedback controller is combined with a bi-manipulation-oriented feedforward control to improve performance in object trajectory tracking. This controller is tested in different experiments with the robot TORO.
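The stated wrench composition reduces to a component-wise sum of two 6D wrenches (3 force + 3 torque components). A minimal sketch, with illustrative values only:

```python
# Component-wise sum of the balancing and bi-manipulation wrenches to obtain
# the desired CoM wrench, as stated in the abstract. 6-vectors: [fx, fy, fz,
# tx, ty, tz]. Numbers in the test are illustrative, not from the paper.

def com_wrench(w_balance, w_manip):
    assert len(w_balance) == 6 and len(w_manip) == 6
    return [b + m for b, m in zip(w_balance, w_manip)]
```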
Citations: 4
Motion Retargeting and Control for Teleoperated Physical Human-Robot Interaction
Pub Date : 2019-10-01 DOI: 10.1109/Humanoids43949.2019.9035060
Akshit Kaplish, K. Yamane
In this paper, we present motion retargeting and control algorithms for teleoperated physical human-robot interaction (pHRI). We employ unilateral teleoperation in which a sensor-equipped operator interacts with a static object such as a mannequin to provide the motion and force references. The controller takes the references as well as current robot states and contact forces as input, and outputs the joint torques to track the operator's contact forces while preserving the expression and style of the motion. We develop a hierarchical optimization scheme combined with a motion retargeting algorithm that resolves the discrepancy between the contact states of the operator and robot due to different kinematic parameters and body shapes. We demonstrate the controller performance on a dual-arm robot with soft skin and contact force sensors using pre-recorded human demonstrations of hugging.
{"title":"Motion Retargeting and Control for Teleoperated Physical Human-Robot Interaction","authors":"Akshit Kaplish, K. Yamane","doi":"10.1109/Humanoids43949.2019.9035060","DOIUrl":"https://doi.org/10.1109/Humanoids43949.2019.9035060","url":null,"abstract":"In this paper, we present motion retargeting and control algorithms for teleoperated physical human-robot interaction (pHRI). We employ unilateral teleoperation in which a sensor-equipped operator interacts with a static object such as a mannequin to provide the motion and force references. The controller takes the references as well as current robot states and contact forces as input, and outputs the joint torques to track the operator's contact forces while preserving the expression and style of the motion. We develop a hierarchical optimization scheme combined with a motion retargeting algorithm that resolves the discrepancy between the contact states of the operator and robot due to different kinematic parameters and body shapes. We demonstrate the controller performance on a dual-arm robot with soft skin and contact force sensors using pre-recorded human demonstrations of hugging.","PeriodicalId":404758,"journal":{"name":"2019 IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids)","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114725886","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
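The retargeting problem described in the abstract above — resolving the mismatch between operator and robot contact states caused by different kinematic parameters and body shapes — can be illustrated with a minimal sketch: express the contact point relative to a common body landmark and scale by limb length, then track the reference contact force with a Jacobian-transpose law. This is a simplified stand-in for the paper's hierarchical optimization; all names, gains, and dimensions are hypothetical.

```python
import numpy as np

def retarget_contact(p_contact_op, shoulder_op, shoulder_rb,
                     arm_len_op, arm_len_rb):
    """Map an operator contact point into the robot's frame by expressing it
    relative to the shoulder and scaling by the ratio of arm lengths, so the
    robot reaches a proportionally equivalent point despite its different size."""
    offset = p_contact_op - shoulder_op
    return shoulder_rb + (arm_len_rb / arm_len_op) * offset

def contact_torque(J, f_des, f_meas, kp=2.0):
    """Joint torques tracking a desired contact force via the Jacobian
    transpose, with proportional feedback on the contact-force error."""
    return J.T @ (f_des + kp * (f_des - f_meas))

# Operator touches 60 cm in front of the shoulder; the robot arm is half as long,
# so the offset is scaled by 0.5 before being re-anchored at the robot shoulder.
p = retarget_contact(p_contact_op=np.array([0.6, 0.0, 1.4]),
                     shoulder_op=np.array([0.0, 0.2, 1.4]),
                     shoulder_rb=np.array([0.0, 0.15, 1.1]),
                     arm_len_op=0.7, arm_len_rb=0.35)
print(p)
```

In the actual system such a geometric mapping would be one task among several inside the hierarchical optimization, with force tracking and posture objectives resolved jointly rather than sequentially.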
Journal
2019 IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids)