Deep fusible skinning of animation sequences

Anastasia Moutafidou, Vasileios Toulatzis, Ioannis Fudos
Journal: The Visual Computer
DOI: 10.1007/s00371-023-03130-3
Published: 2023-11-06

Abstract

Animation compression is a key process in replicating and streaming animated 3D models. Linear Blend Skinning (LBS) facilitates the compression of an animated sequence while maintaining the capability of real-time streaming by deriving vertex-to-proxy-bone assignments and per-frame bone transformations. We introduce an innovative deep learning approach that learns how to assign vertices to proxy bones with persistent labeling. This is accomplished by learning how to correlate vertex trajectories with the bones of fully rigged animated 3D models. Our method applies these pretrained networks to the dynamic characteristics (vertex trajectories) of an unseen animation sequence (a sequence of meshes without skeleton or rigging information) to derive an LBS scheme that outperforms most previous competing approaches by better approximating the original animation sequence with fewer bones, thereby offering better compression and smaller bandwidth requirements for streaming. This is substantiated by a thorough comparative performance evaluation using several error metrics and compression/bandwidth measurements. In this paper, we also introduce a persistent bone labeling scheme that (i) improves the efficiency of our method in terms of lower error values and better visual outcome and (ii) facilitates the fusion of two (or more) LBS schemes via an innovative algorithm that combines two arbitrary LBS schemes. To demonstrate the usefulness and potential of this fusion process, we have combined the outcome of our deep skinning method with that of RigNet, a state-of-the-art method that performs rigging on static meshes, with impressive results.
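The compression scheme the abstract describes rests on the standard LBS reconstruction: each animated vertex is a weighted blend of its rest-pose position transformed by per-frame bone matrices, so a long mesh sequence can be shipped as one weight matrix plus a small set of transforms per frame. The sketch below illustrates that reconstruction step only; the function name, array shapes, and NumPy formulation are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def lbs_reconstruct(rest_verts, weights, bone_transforms):
    """Reconstruct an animation from an LBS decomposition (illustrative sketch).

    rest_verts:      (V, 3) rest-pose vertex positions
    weights:         (V, B) vertex-to-bone weights; each row sums to 1
    bone_transforms: (F, B, 4, 4) affine transform of each bone per frame
    returns:         (F, V, 3) reconstructed vertex positions per frame
    """
    V = rest_verts.shape[0]
    # Homogeneous coordinates so 4x4 affine transforms apply directly.
    homo = np.concatenate([rest_verts, np.ones((V, 1))], axis=1)  # (V, 4)
    # Transform every vertex by every bone, for every frame: (F, B, V, 4).
    per_bone = np.einsum('fbij,vj->fbvi', bone_transforms, homo)
    # Blend the per-bone results with the per-vertex weights: (F, V, 4).
    blended = np.einsum('vb,fbvi->fvi', weights, per_bone)
    return blended[..., :3]
```

For F frames, storing `weights` (V x B, typically sparse) and `bone_transforms` (F x B x 12 free parameters) is far smaller than F x V x 3 raw positions when B is much less than V, which is the compression argument the abstract makes.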

