Human-inspired Video Imitation Learning on Humanoid Model

Chun Hei Lee, Nicole Chee Lin Yueh, K. Woo
{"title":"Human-inspired Video Imitation Learning on Humanoid Model","authors":"Chun Hei Lee, Nicole Chee Lin Yueh, K. Woo","doi":"10.1109/IRC55401.2022.00068","DOIUrl":null,"url":null,"abstract":"Generating good and human-like locomotion or other legged motions for bipedal robots has always been challenging. One of the emerging solutions to this challenge is to use imitation learning. The sources for imitation are mostly state-only demonstrations, so using state-of-the-art Generative Adversarial Imitation Learning (GAIL) with Imitation from Observation (IfO) ability will be an ideal frameworks to use in solving this problem. However, it is often difficult to allow new or complicated movements as the common sources for these frameworks are either expensive to set up or hard to produce satisfactory results without computationally expensive preprocessing, due to accuracy problems. Inspired by how people learn advanced knowledge after acquiring basic understandings of specific subjects, this paper proposes a Motion capture-aided Video Imitation (MoVI) learning framework based on Adversarial Motion Priors (AMP) by combining motion capture data of primary actions like walking with video clips of target motion like running, aiming to create smooth and natural imitation results of the target motion. This framework is able to produce various human-like locomotion by taking the most common and abundant motion capture data with any video clips of motion without the need for expensive datasets or sophisticated preprocessing.","PeriodicalId":282759,"journal":{"name":"2022 Sixth IEEE International Conference on Robotic Computing (IRC)","volume":"91 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 Sixth IEEE International Conference on Robotic Computing (IRC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IRC55401.2022.00068","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Generating natural, human-like locomotion and other legged motions for bipedal robots has always been challenging. One of the emerging solutions to this challenge is imitation learning. Because the available demonstrations are mostly state-only, state-of-the-art Generative Adversarial Imitation Learning (GAIL) with Imitation from Observation (IfO) capability is an ideal framework for this problem. However, such frameworks often struggle to accommodate new or complicated movements: their usual data sources are either expensive to set up or, due to accuracy problems, hard to use without computationally expensive preprocessing. Inspired by how people learn advanced knowledge after acquiring a basic understanding of a subject, this paper proposes a Motion capture-aided Video Imitation (MoVI) learning framework based on Adversarial Motion Priors (AMP). It combines motion capture data of primary actions, such as walking, with video clips of a target motion, such as running, aiming to produce smooth and natural imitations of the target motion. The framework can generate a variety of human-like locomotion from the most common and abundant motion capture data together with ordinary video clips of the target motion, without the need for expensive datasets or sophisticated preprocessing.
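Only the abstract is available here, but since MoVI builds on Adversarial Motion Priors, its core mechanic can be sketched: a discriminator is trained to separate state transitions drawn from the reference data (mocap clips of basic actions plus poses estimated from video of the target motion) from transitions produced by the policy, and its output is converted into a style reward for reinforcement learning. Below is a minimal PyTorch sketch of that AMP-style objective; the names (`AMPDiscriminator`, `style_reward`), network sizes, and feature dimensions are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn


class AMPDiscriminator(nn.Module):
    """Scores state transitions (s_t, s_{t+1}); higher means more reference-like."""

    def __init__(self, obs_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, s: torch.Tensor, s_next: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([s, s_next], dim=-1))


def discriminator_loss(disc, ref_s, ref_s_next, pol_s, pol_s_next):
    # Least-squares GAN objective used by AMP: reference transitions (here,
    # mocap of primary actions plus video-derived poses of the target motion)
    # are regressed toward +1, policy rollouts toward -1.
    d_ref = disc(ref_s, ref_s_next)
    d_pol = disc(pol_s, pol_s_next)
    return ((d_ref - 1.0) ** 2).mean() + ((d_pol + 1.0) ** 2).mean()


def style_reward(disc, s, s_next):
    # AMP's bounded style reward, r = max(0, 1 - 0.25 * (d - 1)^2),
    # combined with any task reward during RL training.
    with torch.no_grad():
        d = disc(s, s_next)
    return torch.clamp(1.0 - 0.25 * (d - 1.0) ** 2, min=0.0).squeeze(-1)


if __name__ == "__main__":
    # Toy usage with random stand-ins for real transition features.
    disc = AMPDiscriminator(obs_dim=32)
    s, s_next = torch.randn(64, 32), torch.randn(64, 32)
    loss = discriminator_loss(disc, s, s_next, torch.randn(64, 32), torch.randn(64, 32))
    reward = style_reward(disc, s, s_next)  # shape (64,), values in [0, 1]
    print(loss.item(), reward.shape)
```

Under this framing, MoVI's contribution as described in the abstract is on the data side: the reference set mixes cheap, abundant mocap of primary actions with video-derived poses of the target motion, so neither an expensive target-motion mocap dataset nor heavy video preprocessing is required.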