Grasping novel objects with a dexterous robotic hand through neuroevolution

Pei-Chi Huang, J. Lehman, A. Mok, R. Miikkulainen, L. Sentis
2014 IEEE Symposium on Computational Intelligence in Control and Automation (CICA), December 2014
DOI: 10.1109/CICA.2014.7013242
Citations: 22

Abstract

Robotic grasping of a target object without advance knowledge of its three-dimensional model is a challenging problem. Many studies indicate that robot learning from demonstration (LfD) is a promising way to improve grasping performance, but complete automation of the grasping task in unforeseen circumstances remains difficult. As an alternative to LfD, this paper leverages limited human supervision to achieve robotic grasping of unknown objects in unforeseen circumstances. The technical question is what form of human supervision best minimizes the effort of the human supervisor. The approach here applies a human-supplied bounding box to focus the robot's visual processing on the target object, thereby reducing the dimensionality of the robot's computer-vision processing. After the human supervisor defines the bounding box through the human-machine interface, the rest of the grasping task is automated through a vision-based feature-extraction approach in which the dexterous hand learns to grasp objects, without relying on pre-computed object models, through the NEAT neuroevolution algorithm. Given only low-level sensing data from a commercial Kinect depth sensor, our approach evolves neural networks to identify appropriate hand positions and orientations for grasping novel objects. Further, the machine-learning results from simulation have been validated by transferring the trained networks to Dreamer, a physical robot made by Meka Robotics. The results demonstrate that grasping novel objects by carrying neuroevolution results from simulation to reality is possible.
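The pipeline the abstract describes — evolving a neural network that maps low-level depth features (extracted inside the human-supplied bounding box) to a hand position and orientation, scored by grasp success — can be illustrated with a minimal sketch. This is a hypothetical, heavily simplified stand-in, not the paper's implementation: it uses a fixed-topology linear network and simple truncation selection instead of NEAT's evolving topologies and speciation, the feature-vector size is assumed, and distance to a made-up "good" pose substitutes for the paper's simulated grasp-success fitness.

```python
import random

random.seed(0)

N_FEATURES = 8  # assumed size of the depth-feature vector (not from the paper)
N_OUTPUTS = 6   # hand position (x, y, z) and orientation (roll, pitch, yaw)

def make_genome():
    # Genome: flattened weights of a single linear layer (a stand-in for
    # NEAT's variable-topology networks).
    return [random.uniform(-1, 1) for _ in range(N_FEATURES * N_OUTPUTS)]

def activate(genome, features):
    # Linear map from the feature vector to the six grasp-pose outputs.
    return [sum(genome[o * N_FEATURES + i] * features[i]
                for i in range(N_FEATURES))
            for o in range(N_OUTPUTS)]

def fitness(genome, target_pose, features):
    # In the paper, fitness comes from grasp outcomes in simulation; here we
    # substitute negative squared distance to a made-up target pose.
    pose = activate(genome, features)
    return -sum((p - t) ** 2 for p, t in zip(pose, target_pose))

def evolve(features, target_pose, pop_size=50, generations=100, sigma=0.1):
    population = [make_genome() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda g: fitness(g, target_pose, features),
                        reverse=True)
        parents = population[: pop_size // 5]        # truncation selection
        population = parents + [
            [w + random.gauss(0, sigma) for w in random.choice(parents)]
            for _ in range(pop_size - len(parents))  # Gaussian mutation
        ]
    return max(population, key=lambda g: fitness(g, target_pose, features))

# Toy inputs: a random feature vector and an arbitrary "good" grasp pose.
features = [random.uniform(0, 1) for _ in range(N_FEATURES)]
target = [0.4, 0.0, 0.3, 0.0, 1.57, 0.0]
best = evolve(features, target)
```

After evolution, `activate(best, features)` yields a 6-DOF grasp pose close to the target; in the paper's setting this pose would instead be executed by the simulated (and later physical) hand, with the outcome feeding back into fitness.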