Attentive One-Shot Meta-Imitation Learning From Visual Demonstration

Vishal Bhutani, A. Majumder, M. Vankadari, S. Dutta, Aaditya Asati, Swagat Kumar
{"title":"从视觉示范中学习一次性元模仿","authors":"Vishal Bhutani, A. Majumder, M. Vankadari, S. Dutta, Aaditya Asati, Swagat Kumar","doi":"10.1109/icra46639.2022.9812281","DOIUrl":null,"url":null,"abstract":"The ability to apply a previously-learned skill (e.g., pushing) to a new task (context or object) is an important requirement for new-age robots. An attempt is made to solve this problem in this paper by proposing a deep meta-imitation learning framework comprising of an attentive-embedding net-work and a control network, capable of learning a new task in an end-to-end manner while requiring only one or a few visual demonstrations. The feature embeddings learnt by incorporating spatial attention is shown to provide higher embedding and control accuracy compared to other state-of-the-art methods such as TecNet [7] and MIL [4]. The interaction between the embedding and the control networks is improved by using multiplicative skip-connections and is shown to overcome the overfitting of the trained model. The superiority of the proposed model is established through rigorous experimentation using a publicly available dataset and a new dataset created using PyBullet [36]. Several ablation studies have been carried out to justify the design choices.","PeriodicalId":341244,"journal":{"name":"2022 International Conference on Robotics and Automation (ICRA)","volume":"77 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Attentive One-Shot Meta-Imitation Learning From Visual Demonstration\",\"authors\":\"Vishal Bhutani, A. Majumder, M. Vankadari, S. Dutta, Aaditya Asati, Swagat Kumar\",\"doi\":\"10.1109/icra46639.2022.9812281\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The ability to apply a previously-learned skill (e.g., pushing) to a new task (context or object) is an important requirement for new-age robots. An attempt is made to solve this problem in this paper by proposing a deep meta-imitation learning framework comprising of an attentive-embedding net-work and a control network, capable of learning a new task in an end-to-end manner while requiring only one or a few visual demonstrations. The feature embeddings learnt by incorporating spatial attention is shown to provide higher embedding and control accuracy compared to other state-of-the-art methods such as TecNet [7] and MIL [4]. The interaction between the embedding and the control networks is improved by using multiplicative skip-connections and is shown to overcome the overfitting of the trained model. The superiority of the proposed model is established through rigorous experimentation using a publicly available dataset and a new dataset created using PyBullet [36]. 
Several ablation studies have been carried out to justify the design choices.\",\"PeriodicalId\":341244,\"journal\":{\"name\":\"2022 International Conference on Robotics and Automation (ICRA)\",\"volume\":\"77 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-05-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 International Conference on Robotics and Automation (ICRA)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/icra46639.2022.9812281\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 International Conference on Robotics and Automation (ICRA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/icra46639.2022.9812281","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

The ability to apply a previously learned skill (e.g., pushing) to a new task (context or object) is an important requirement for new-age robots. This paper addresses the problem by proposing a deep meta-imitation learning framework comprising an attentive-embedding network and a control network, capable of learning a new task in an end-to-end manner from only one or a few visual demonstrations. The feature embeddings learned by incorporating spatial attention are shown to provide higher embedding and control accuracy than state-of-the-art methods such as TecNet [7] and MIL [4]. The interaction between the embedding and control networks is improved through multiplicative skip-connections, which are shown to overcome overfitting of the trained model. The superiority of the proposed model is established through rigorous experimentation on a publicly available dataset and a new dataset created using PyBullet [36]. Several ablation studies justify the design choices.
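The abstract names two architectural ideas: a spatial-attention embedding of the demonstration, and a control network conditioned on that embedding through multiplicative skip-connections. The PyTorch sketch below is a rough illustration of how these pieces could be wired together, not the authors' implementation; all module names, layer sizes, and dimensions are assumptions made for the example.

```python
# Minimal sketch (not the paper's code) of an attentive embedding network
# plus a control network gated by multiplicative skip-connections.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialAttentionEmbedding(nn.Module):
    """Maps one demonstration frame to a task embedding via spatial attention."""
    def __init__(self, embed_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 32, 3, stride=2), nn.ReLU(),
        )
        self.attn = nn.Conv2d(32, 1, 1)   # 1-channel spatial attention map
        self.fc = nn.Linear(32, embed_dim)

    def forward(self, img):               # img: (B, 3, H, W)
        feat = self.conv(img)             # (B, 32, h, w)
        # Softmax over spatial locations, then attention-weighted pooling.
        weights = torch.softmax(self.attn(feat).flatten(2), dim=-1)   # (B, 1, h*w)
        pooled = (feat.flatten(2) * weights).sum(-1)                  # (B, 32)
        return self.fc(pooled)            # (B, embed_dim)

class ControlNetwork(nn.Module):
    """Predicts an action from the current observation, conditioned on the
    task embedding z through multiplicative skip-connections."""
    def __init__(self, obs_dim=128, embed_dim=64, action_dim=7):
        super().__init__()
        self.fc1 = nn.Linear(obs_dim, 64)
        self.fc2 = nn.Linear(64, 64)
        self.gate1 = nn.Linear(embed_dim, 64)  # produce multiplicative gates
        self.gate2 = nn.Linear(embed_dim, 64)
        self.out = nn.Linear(64, action_dim)

    def forward(self, obs, z):
        # Each hidden layer is modulated multiplicatively by the embedding,
        # rather than by additive concatenation.
        h = F.relu(self.fc1(obs)) * torch.sigmoid(self.gate1(z))
        h = F.relu(self.fc2(h)) * torch.sigmoid(self.gate2(z))
        return self.out(h)

# One-shot use: embed a single demonstration frame, then condition control on it.
embed_net, ctrl_net = SpatialAttentionEmbedding(), ControlNetwork()
demo_frame = torch.randn(1, 3, 64, 64)   # one visual demonstration frame
z = embed_net(demo_frame)
action = ctrl_net(torch.randn(1, 128), z)
```

The multiplicative gating lets the demonstration embedding scale hidden activations of the controller directly, which is one plausible reading of why such skip-connections tighten the embedding-control interaction compared to simple concatenation.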