Affordance detection for task-specific grasping using deep learning

Mia Kokic, J. A. Stork, Joshua A. Haustein, D. Kragic
DOI: 10.1109/HUMANOIDS.2017.8239542
Published in: 2017 IEEE-RAS 17th International Conference on Humanoid Robotics (Humanoids), November 2017
Citations: 77

Abstract

In this paper we utilize the notion of affordances to model relations between task, object and a grasp to address the problem of task-specific robotic grasping. We use convolutional neural networks for encoding and detecting object affordances, class and orientation, which we utilize to formulate grasp constraints. Our approach applies to previously unseen objects from a fixed set of classes and facilitates reasoning about which tasks an object affords and how to grasp it for that task. We evaluate affordance detection on full-view and partial-view synthetic data and compute task-specific grasps for objects that belong to ten different classes and afford five different tasks. We demonstrate the feasibility of our approach by employing an optimization-based grasp planner to compute task-specific grasps.
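The core idea in the abstract is that detected affordances translate into grasp constraints: the part of an object that affords the task (e.g. the blade of a knife for cutting) must remain free, so the planner is restricted to the complementary region (e.g. the handle). The following is a minimal sketch of that thresholding step only, not the paper's actual formulation; the function name `grasp_mask`, the task list ordering, and the per-point score layout are all assumptions for illustration, and in the paper the constraints feed an optimization-based grasp planner rather than a simple mask.

```python
import numpy as np

# Five tasks, matching the count stated in the abstract (labels assumed).
TASKS = ["handover", "cut", "pour", "poke", "scoop"]

def grasp_mask(affordance_scores, task, threshold=0.5):
    """Return a boolean mask over object points allowed for grasping.

    affordance_scores: (N, len(TASKS)) array of per-point affordance
    scores in [0, 1], as a CNN detection head might output.
    Points whose score for `task` exceeds `threshold` are reserved for
    performing the task itself and are excluded from the grasp region.
    """
    t = TASKS.index(task)
    return affordance_scores[:, t] < threshold

# Toy example: 4 object points; the first two strongly afford "cut"
# (blade-like), the last two do not (handle-like).
scores = np.array([
    [0.1, 0.9, 0.0, 0.0, 0.0],
    [0.2, 0.8, 0.0, 0.0, 0.0],
    [0.9, 0.1, 0.0, 0.0, 0.0],
    [0.8, 0.2, 0.0, 0.0, 0.0],
])
mask = grasp_mask(scores, "cut")
print(mask)  # blade points excluded, handle points allowed
```

A grasp planner would then restrict contact points to `mask == True`, which is one simple way such a constraint could be consumed downstream.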