RoboCraft: Learning to see, simulate, and shape elasto-plastic objects in 3D with graph networks

Haochen Shi, Huazhe Xu, Zhiao Huang, Yunzhu Li, Jiajun Wu
{"title":"RoboCraft: Learning to see, simulate, and shape elasto-plastic objects in 3D with graph networks","authors":"Haochen Shi, Huazhe Xu, Zhiao Huang, Yunzhu Li, Jiajun Wu","doi":"10.1177/02783649231219020","DOIUrl":null,"url":null,"abstract":"Modeling and manipulating elasto-plastic objects are essential capabilities for robots to perform complex industrial and household interaction tasks (e.g., stuffing dumplings, rolling sushi, and making pottery). However, due to the high degrees of freedom of elasto-plastic objects, significant challenges exist in virtually every aspect of the robotic manipulation pipeline, for example, representing the states, modeling the dynamics, and synthesizing the control signals. We propose to tackle these challenges by employing a particle-based representation for elasto-plastic objects in a model-based planning framework. Our system, RoboCraft, only assumes access to raw RGBD visual observations. It transforms the sensory data into particles and learns a particle-based dynamics model using graph neural networks (GNNs) to capture the structure of the underlying system. The learned model can then be coupled with model predictive control (MPC) algorithms to plan the robot’s behavior. We show through experiments that with just 10 min of real-world robot interaction data, our robot can learn a dynamics model that can be used to synthesize control signals to deform elasto-plastic objects into various complex target shapes, including shapes that the robot has never encountered before. We perform systematic evaluations in both simulation and the real world to demonstrate the robot’s manipulation capabilities.","PeriodicalId":501362,"journal":{"name":"The International Journal of Robotics Research","volume":"64 2","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2023-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"The International Journal of Robotics Research","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1177/02783649231219020","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Modeling and manipulating elasto-plastic objects are essential capabilities for robots to perform complex industrial and household interaction tasks (e.g., stuffing dumplings, rolling sushi, and making pottery). However, due to the high degrees of freedom of elasto-plastic objects, significant challenges exist in virtually every aspect of the robotic manipulation pipeline, for example, representing the states, modeling the dynamics, and synthesizing the control signals. We propose to tackle these challenges by employing a particle-based representation for elasto-plastic objects in a model-based planning framework. Our system, RoboCraft, only assumes access to raw RGBD visual observations. It transforms the sensory data into particles and learns a particle-based dynamics model using graph neural networks (GNNs) to capture the structure of the underlying system. The learned model can then be coupled with model predictive control (MPC) algorithms to plan the robot’s behavior. We show through experiments that with just 10 min of real-world robot interaction data, our robot can learn a dynamics model that can be used to synthesize control signals to deform elasto-plastic objects into various complex target shapes, including shapes that the robot has never encountered before. We perform systematic evaluations in both simulation and the real world to demonstrate the robot’s manipulation capabilities.
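To make the pipeline described above concrete, here is a minimal, hypothetical sketch of its three stages: subsampling a deprojected RGBD point cloud into particles, rolling out a graph-network dynamics model, and planning with sampling-based MPC against a target particle set. The paper's code is not reproduced here; every function name, shape, and hyperparameter below is an illustrative assumption, the GNN uses random stand-in weights rather than parameters learned from interaction data, and the action is simplified to a shared displacement.

```python
# Hypothetical RoboCraft-style pipeline: particles -> GNN dynamics -> MPC.
# All names, shapes, and constants are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sample_particles(points: np.ndarray, n: int = 300) -> np.ndarray:
    """Subsample a deprojected RGBD point cloud (N, 3) into n particles."""
    idx = rng.choice(len(points), size=n, replace=len(points) < n)
    return points[idx]

def build_edges(x: np.ndarray, radius: float = 0.02) -> np.ndarray:
    """Connect particle pairs closer than `radius` (naive O(n^2) search)."""
    d = np.linalg.norm(x[:, None] - x[None, :], axis=-1)
    src, dst = np.nonzero((d < radius) & (d > 0))
    return np.stack([src, dst])

class ToyGNNDynamics:
    """One message-passing round plus a per-particle position update.
    Weights are random stand-ins for learned parameters."""
    def __init__(self, dim: int = 3, hidden: int = 32):
        self.w_msg = rng.normal(scale=0.1, size=(2 * dim, hidden))
        self.w_out = rng.normal(scale=0.1, size=(hidden, dim))

    def step(self, x: np.ndarray, action: np.ndarray) -> np.ndarray:
        src, dst = build_edges(x)
        # Edge message from sender position and relative displacement.
        msg = np.tanh(np.concatenate([x[src], x[dst] - x[src]], axis=-1) @ self.w_msg)
        agg = np.zeros((len(x), msg.shape[-1]))
        np.add.at(agg, dst, msg)              # sum messages per receiving particle
        return x + agg @ self.w_out + action  # predicted next particle positions

def mpc_random_shooting(model, x0, target, horizon=5, n_samples=64):
    """Return the first action of the sampled sequence whose rollout best
    matches the target shape under a Chamfer-style nearest-neighbor cost."""
    best_cost, best_action = np.inf, None
    for _ in range(n_samples):
        actions = rng.normal(scale=1e-3, size=(horizon, 3))  # shared gripper motion
        x = x0
        for a in actions:
            x = model.step(x, a)
        d = np.linalg.norm(x[:, None] - target[None, :], axis=-1)
        cost = d.min(axis=1).mean() + d.min(axis=0).mean()
        if cost < best_cost:
            best_cost, best_action = cost, actions[0]
    return best_action

cloud = rng.uniform(-0.05, 0.05, size=(5000, 3))  # stand-in for an RGBD point cloud
x0 = sample_particles(cloud)
target = x0 * np.array([1.2, 0.8, 1.0])           # hypothetical "pinched" target shape
action = mpc_random_shooting(ToyGNNDynamics(), x0, target)
print("first planned gripper displacement:", action)
```

In the full system this planning loop would run receding-horizon: execute the first planned action on the robot, re-perceive the particles from new RGBD frames, and replan, which is what lets the learned model correct its own prediction errors over long manipulation sequences.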