Sensorimotor Cross-Behavior Knowledge Transfer for Grounded Category Recognition

Gyan Tatiya, Ramtin Hosseini, M. C. Hughes, J. Sinapov
{"title":"Sensorimotor Cross-Behavior Knowledge Transfer for Grounded Category Recognition","authors":"Gyan Tatiya, Ramtin Hosseini, M. C. Hughes, J. Sinapov","doi":"10.1109/DEVLRN.2019.8850715","DOIUrl":null,"url":null,"abstract":"Humans use exploratory behaviors coupled with multi-modal perception to learn about the objects around them. Research in robotics has shown that robots too can use such behaviors (e.g., grasping, pushing, shaking) to infer object properties that cannot always be detected using visual input alone. However, such learned representations are specific to each individual robot and cannot be directly transferred to another robot with different actions, sensors, and morphology. To address this challenge, we propose a framework for knowledge transfer across different behaviors and modalities that enables a source robot to transfer knowledge about objects to a target robot that has never interacted with them. The intuition behind our approach is that if two robots interact with a shared set of objects, the produced sensory data can be used to learn a mapping between the two robots' feature spaces. We evaluate the framework on a category recognition task using a dataset containing 9 robot behaviors performed multiple times on a set of 100 objects. The results show that the proposed framework can enable a target robot to perform category recognition on a set of novel objects and categories without the need to physically interact with the objects to learn the categorization model.","PeriodicalId":318973,"journal":{"name":"2019 Joint IEEE 9th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 Joint IEEE 9th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/DEVLRN.2019.8850715","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 6

Abstract

Humans use exploratory behaviors coupled with multi-modal perception to learn about the objects around them. Research in robotics has shown that robots too can use such behaviors (e.g., grasping, pushing, shaking) to infer object properties that cannot always be detected using visual input alone. However, such learned representations are specific to each individual robot and cannot be directly transferred to another robot with different actions, sensors, and morphology. To address this challenge, we propose a framework for knowledge transfer across different behaviors and modalities that enables a source robot to transfer knowledge about objects to a target robot that has never interacted with them. The intuition behind our approach is that if two robots interact with a shared set of objects, the produced sensory data can be used to learn a mapping between the two robots' feature spaces. We evaluate the framework on a category recognition task using a dataset containing 9 robot behaviors performed multiple times on a set of 100 objects. The results show that the proposed framework can enable a target robot to perform category recognition on a set of novel objects and categories without the need to physically interact with the objects to learn the categorization model.
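The core idea can be illustrated with a short sketch. The sketch below is not the authors' exact architecture; it only illustrates the projection idea under simple assumptions: paired per-trial feature vectors from the two robots on the shared objects, a generic regressor (here scikit-learn's MLPRegressor) standing in for the learned mapping, and an off-the-shelf SVM as the category recognition model. All array names, dimensions, and the random placeholder data are hypothetical.

```python
# Minimal sketch of cross-robot feature-space transfer: both robots interact
# with a SHARED object set, a regressor maps source-robot features to the
# target robot's feature space, and the target robot then trains a category
# classifier on novel objects it has never physically explored.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Paired sensorimotor features for the shared objects (one row per trial).
# In the paper these would come from behaviors such as grasping, pushing,
# or shaking; here they are random placeholders.
n_shared, d_src, d_tgt = 200, 64, 48
X_src_shared = rng.normal(size=(n_shared, d_src))   # source robot's features
X_tgt_shared = rng.normal(size=(n_shared, d_tgt))   # target robot's features

# 1) Learn a mapping from the source feature space to the target feature space.
mapping = MLPRegressor(hidden_layer_sizes=(128,), max_iter=2000, random_state=0)
mapping.fit(X_src_shared, X_tgt_shared)

# Features and category labels for NOVEL objects explored only by the source robot.
n_novel = 120
X_src_novel = rng.normal(size=(n_novel, d_src))
y_novel = rng.integers(0, 5, size=n_novel)           # e.g., 5 object categories

# 2) Project the novel-object features into the target robot's feature space
#    and train the target robot's category recognition model on them.
X_tgt_projected = mapping.predict(X_src_novel)
classifier = SVC(kernel="rbf").fit(X_tgt_projected, y_novel)

# 3) At test time, the target robot classifies its own real observations.
X_tgt_test = rng.normal(size=(10, d_tgt))
print(classifier.predict(X_tgt_test))
```

In this toy setup the mapping cannot learn anything meaningful from random data; with real paired sensorimotor features, the quality of the projection (and hence of the transferred classifier) depends on how many shared objects the two robots have interacted with.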