Learning prohibited and authorised grasping locations from a few demonstrations

François Hélénon, Laurent Bimont, E. Nyiri, Stéphane Thiery, O. Gibaru
DOI: 10.1109/RO-MAN47096.2020.9223486
Published in: 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), August 2020
Citations: 5

Abstract

Our motivation is to ease the reconfiguration of robots for pick-and-place tasks in an industrial context. This paper proposes a fast-learning neural network model, trained from one or a few demonstrations in under 5 minutes, that efficiently predicts grasping locations on a specific object. The proposed methodology is easy to apply in an industrial context because it is based exclusively on the operator's demonstrations and requires no CAD model, existing database, or simulator. Since the predictions of a neural network can be erroneous, especially when it is trained on very little data, we propose to indicate both authorised and prohibited locations for safety reasons. This allows us to handle fragile objects and to perform task-oriented grasping. Our model learns the semantic representation of objects (prohibited/authorised) thanks to a simplified data representation, a simplified neural network architecture, and an adequate training framework. We trained object-specific networks and conducted experiments on a real 7-DOF robot, which showed good performance (70 to 100% depending on the object) using only one demonstration. The proposed model also generalises well: performance remains good even when grasping several similar objects with a network trained on only one of them.
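The core idea of labelling both authorised and prohibited grasp locations from a few demonstrated points can be illustrated with a toy example. The sketch below is hypothetical and is not the paper's architecture or data representation: it trains a plain logistic classifier on a handful of operator-labelled 2-D grasp locations and then classifies new candidate points.

```python
import numpy as np

# Hypothetical illustration (not the paper's network): a tiny logistic
# classifier that labels candidate grasp locations as authorised (1) or
# prohibited (0) from a few operator-demonstrated points.

# A few demonstrated grasp locations on a 2-D object surface (x, y in cm).
X = np.array([[1.0, 1.0], [1.5, 0.8], [1.2, 1.3],   # authorised (e.g. handle)
              [4.0, 4.2], [4.5, 3.8], [3.8, 4.0]])  # prohibited (e.g. fragile)
y = np.array([1, 1, 1, 0, 0, 0])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fit the weights with plain batch gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = sigmoid(X @ w + b)
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

def predict(point):
    """Label a candidate grasp location as 'authorised' or 'prohibited'."""
    return "authorised" if sigmoid(point @ w + b) > 0.5 else "prohibited"

print(predict(np.array([1.1, 1.0])))  # near the demonstrated handle
print(predict(np.array([4.2, 4.0])))  # near the fragile region
```

In the paper's setting the input would be richer than bare coordinates and the model a small neural network, but the principle is the same: very few labelled demonstrations define the authorised and prohibited regions, and unseen candidates are classified against them.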