Pose-to-Motion: Cross-Domain Motion Retargeting with Pose Prior

Computer Graphics Forum · IF 2.7 · JCR Q2 (Computer Science, Software Engineering) · CAS Zone 4 (Computer Science) · Pub Date: 2024-10-09 · DOI: 10.1111/cgf.15170
Qingqing Zhao, Peizhuo Li, Wang Yifan, Sorkine-Hornung Olga, Gordon Wetzstein
Citations: 0

Abstract




Creating plausible motions for a diverse range of characters is a long-standing goal in computer graphics. Current learning-based motion synthesis methods rely on large-scale motion datasets, which are often difficult, if not impossible, to acquire. Pose data, on the other hand, is more accessible: static posed characters are easier to create and can even be extracted from images using recent advances in computer vision. In this paper, we tap into this alternative data source and introduce a neural motion synthesis approach based on retargeting, which generates plausible motion for characters that have only pose data by transferring motion from a single existing motion capture dataset of a drastically different character. Our experiments show that our method effectively combines the motion features of the source character with the pose features of the target character, and performs robustly on small or noisy pose datasets, ranging from a few artist-created poses to noisy poses estimated directly from images. Additionally, a user study indicated that a majority of participants found our retargeted motion more enjoyable to watch, more lifelike in appearance, and less prone to artifacts. Our code and dataset can be accessed here.
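The core idea in the abstract, keeping the source character's temporal motion structure while staying on the target character's pose manifold, can be illustrated with a deliberately simple, non-learned toy. This is not the authors' neural method; the nearest-pose projection, the flat pose-vector representation, and the `blend` parameter are all assumptions made purely for illustration.

```python
import numpy as np

def retarget_toy(source_motion, target_poses, blend=0.5):
    """Toy analogue of pose-prior retargeting (NOT the paper's method).

    Each source frame is pulled toward its nearest target pose, so the
    output inherits the source's frame-to-frame dynamics while staying
    close to the target character's (small) set of known poses.

    source_motion: (T, D) array, one D-dimensional pose vector per frame
    target_poses:  (N, D) array, a small set of static target poses
    blend:         0.0 = pure source motion, 1.0 = snap to nearest target pose
    """
    out = np.empty_like(source_motion)
    for t, frame in enumerate(source_motion):
        # Nearest target pose in plain Euclidean pose space; a learned
        # method would use a shared latent space instead.
        dists = np.linalg.norm(target_poses - frame, axis=1)
        nearest = target_poses[np.argmin(dists)]
        out[t] = (1.0 - blend) * frame + blend * nearest
    return out

# Tiny demo with random stand-in data.
rng = np.random.default_rng(0)
motion = rng.normal(size=(8, 4))   # 8 frames of a 4-DoF "source character"
poses = rng.normal(size=(3, 4))    # 3 artist-created "target" poses
result = retarget_toy(motion, poses, blend=0.7)
print(result.shape)  # (8, 4)
```

The toy makes the trade-off in the abstract concrete: with only a handful of target poses, output quality hinges on how strongly frames are pulled onto the sparse pose set, which is exactly the gap a learned pose prior is meant to bridge.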

Source journal
Computer Graphics Forum (Engineering & Technology – Computer Science: Software Engineering)
CiteScore: 5.80
Self-citation rate: 12.00%
Articles per year: 175
Review time: 3-6 weeks
About the journal: Computer Graphics Forum is the official journal of Eurographics, published in cooperation with Wiley-Blackwell, and is a unique, international source of information for computer graphics professionals interested in graphics developments worldwide. It is now one of the leading journals for researchers, developers and users of computer graphics in both commercial and academic environments. The journal reports on the latest developments in the field throughout the world and covers all aspects of the theory, practice and application of computer graphics.