A 2-Stage Framework for Learning to Push Unknown Objects

Ziyan Gao, A. Elibol, N. Chong
DOI: 10.1109/ICDL-EpiRob48136.2020.9278075
Published in: 2020 Joint IEEE 10th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)
Publication date: 2020-10-26
Citations: 4

Abstract

Robotic manipulation has generally been applied to particular settings and a limited number of known objects. To manipulate novel objects, robots need to be capable of discovering the physical properties of objects, such as the center of mass, and reorienting objects to the pose required for subsequent actions. In this work, we propose a computationally efficient 2-stage framework for planar pushing that allows a robot to push novel objects to a specified pose in a small number of pushing steps. We developed three modules: a Coarse Action Predictor (CAP), a Forward Dynamic Estimator (FDE), and a Physical Property Estimator (PPE). The CAP module predicts a Gaussian mixture distribution over actions. The FDE learns the causality between an action and the successive object state. The PPE, based on a recurrent neural network, predicts the physical center of mass (PCOM) from robot-object interaction. Our preliminary experiments show promising results toward meeting the practical requirements of manipulating novel objects.
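The abstract does not specify the CAP's architecture beyond predicting a Gaussian mixture over actions. As a minimal sketch of what such a mixture-density output head could look like, the example below maps a hypothetical object-state feature vector to mixture weights, component means, and standard deviations, then samples a push action; the layer sizes, feature dimensions, and the single linear layer are all illustrative assumptions, not the paper's design.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax."""
    e = np.exp(z - z.max())
    return e / e.sum()

def cap_forward(state, W, b, n_components=3, action_dim=2):
    """Map a state feature vector to a Gaussian mixture over push actions:
    mixture weights pi, component means mu, diagonal std devs sigma."""
    h = W @ state + b                            # stand-in for a learned network
    k, d = n_components, action_dim
    pi = softmax(h[:k])                          # mixture weights, sum to 1
    mu = h[k:k + k * d].reshape(k, d)            # component means
    sigma = np.exp(h[k + k * d:]).reshape(k, d)  # exp keeps std devs positive
    return pi, mu, sigma

def sample_action(pi, mu, sigma, rng):
    """Sample a push action: pick a component, then draw from its Gaussian."""
    j = rng.choice(len(pi), p=pi)
    return rng.normal(mu[j], sigma[j])

rng = np.random.default_rng(0)
state = rng.normal(size=8)                  # hypothetical object-state features
k, d = 3, 2
out_dim = k + 2 * k * d                     # pi + mu + log-sigma parameters
W = rng.normal(scale=0.1, size=(out_dim, 8))
b = np.zeros(out_dim)

pi, mu, sigma = cap_forward(state, W, b)
action = sample_action(pi, mu, sigma, rng)
```

In training, such a head is typically fit by minimizing the negative log-likelihood of demonstrated push actions under the predicted mixture, which lets the model represent several plausible pushes for the same object pose.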