Whole-Body Teleoperation for Mobile Manipulation at Zero Added Cost

IF 5.3 · CAS Tier 2 (Computer Science) · JCR Q2 (Robotics) · IEEE Robotics and Automation Letters · Pub Date: 2025-02-10 · DOI: 10.1109/LRA.2025.3540582
Daniel Honerkamp;Harsh Mahesheka;Jan Ole von Hartz;Tim Welschehold;Abhinav Valada
IEEE Robotics and Automation Letters, vol. 10, no. 4, pp. 3198-3205, 2025.
Citations: 0

Abstract

Demonstration data plays a key role in learning complex behaviors and training robotic foundation models. While effective control interfaces exist for static manipulators, data collection remains cumbersome and time-intensive for mobile manipulators due to their large number of degrees of freedom. Although specialized hardware, avatars, or motion tracking can enable whole-body control, these approaches are either expensive, robot-specific, or suffer from the embodiment mismatch between robot and human demonstrator. In this work, we present MoMa-Teleop, a novel teleoperation method that infers end-effector motions from existing interfaces and delegates the base motions to a previously developed reinforcement learning agent, leaving the operator free to focus fully on the task-relevant end-effector motions. This enables whole-body teleoperation of mobile manipulators via standard interfaces such as joysticks or hand guidance, with no additional hardware or setup costs. Moreover, the operator is not bound to a tracked workspace and can move freely with the robot over spatially extended tasks. We demonstrate that our approach yields a significant reduction in task completion time across a variety of robots and tasks. As the generated data covers diverse whole-body motions without embodiment mismatch, it enables efficient imitation learning. By focusing on task-specific end-effector motions, our approach learns skills that transfer to unseen settings, such as new obstacles or changed object positions, from as few as five demonstrations.
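The core idea in the abstract is a control decomposition: the operator commands only the end-effector through a standard interface, while a learned base policy moves the mobile base to keep the target reachable. The following minimal sketch illustrates that decomposition in a simulated loop. All names (`operator_ee_delta`, `base_policy`, `teleop_step`) and the simple proportional base controller are hypothetical stand-ins for illustration only; the actual paper delegates base motion to a pretrained reinforcement learning agent, which is not reproduced here.

```python
import numpy as np

def operator_ee_delta(t):
    # Stand-in for a standard interface (joystick / hand guidance):
    # a small end-effector displacement per tick in the world frame.
    return np.array([0.01, 0.0, 0.0])

def base_policy(ee_target, base_pose):
    # Hypothetical stand-in for the learned base agent: drive the base
    # toward the end-effector target so it stays in the arm's workspace.
    error = ee_target[:2] - base_pose[:2]
    gain = 0.5
    return np.clip(gain * error, -0.2, 0.2)  # planar (vx, vy) command

def teleop_step(ee_target, base_pose, t, dt=0.05):
    # Operator updates only the end-effector target; the base follows
    # autonomously, so no whole-body input device is needed.
    ee_target = ee_target + operator_ee_delta(t)
    base_vel = base_policy(ee_target, base_pose)
    base_pose = base_pose + dt * np.append(base_vel, 0.0)
    return ee_target, base_pose

ee_target = np.zeros(3)            # end-effector target (x, y, z)
base_pose = np.zeros(3)            # base pose (x, y, yaw)
for t in range(200):
    ee_target, base_pose = teleop_step(ee_target, base_pose, t)
```

Running the loop, the end-effector target advances 2 m in x while the base trails it with a bounded lag, mirroring the paper's claim that the operator never needs to command the base explicitly.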
Source journal: IEEE Robotics and Automation Letters (Computer Science: Computer Science Applications)
CiteScore: 9.60
Self-citation rate: 15.40%
Articles per year: 1428
Journal description: The scope of this journal is to publish peer-reviewed articles that provide a timely and concise account of innovative research ideas and application results, reporting significant theoretical findings and application case studies in areas of robotics and automation.