
Computer Graphics Forum: Latest Publications

Strongly Coupled Simulation of Magnetic Rigid Bodies
IF 2.7 | CAS Q4 Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-10-17 | DOI: 10.1111/cgf.15185
L. Westhofen, J. A. Fernández-Fernández, S. R. Jeske, J. Bender

We present a strongly coupled method for the robust simulation of linear magnetic rigid bodies. Our approach describes the magnetic effects as part of an incremental potential function. This potential is inserted into the reformulation of the equations of motion for rigid bodies as an optimization problem. For handling collision and friction, we lean on the Incremental Potential Contact (IPC) method. Furthermore, we provide a novel hybrid explicit/implicit time integration scheme for the magnetic potential based on a distance criterion. This reduces the fill-in of the energy Hessian in cases where the change in magnetic potential energy is small, leading to a simulation speedup without compromising the stability of the system. The resulting system yields a strongly coupled method for the robust simulation of magnetic effects. We demonstrate the robustness in theory by analyzing the behavior of the magnetic attraction against the contact resolution, and stability in practice by simulating exceedingly strong and arbitrarily shaped magnets. The results remain free of artifacts such as bouncing at time step sizes larger than those possible with the equivalent weakly coupled approach. Finally, we showcase the utility of our method in different scenarios with complex joints and numerous magnets.
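
To make the distance criterion concrete, here is a minimal sketch (not the authors' implementation): magnet pairs farther apart than a threshold `d_switch` are integrated explicitly with a frozen point-dipole force, so they contribute no blocks to the energy Hessian, while near pairs stay in the implicit incremental potential. The threshold, the point-dipole force model, and the brute-force pair loop are all illustrative assumptions.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability [T*m/A]

def dipole_force(p_i, m_i, p_j, m_j):
    """Point-dipole force on magnet i exerted by magnet j."""
    r = p_i - p_j
    d = np.linalg.norm(r)
    rh = r / d
    c = 3.0 * MU0 / (4.0 * np.pi * d**4)
    return c * (np.dot(m_j, rh) * m_i + np.dot(m_i, rh) * m_j
                + np.dot(m_i, m_j) * rh
                - 5.0 * np.dot(m_i, rh) * np.dot(m_j, rh) * rh)

def split_magnet_pairs(pos, d_switch):
    """Partition magnet pairs by the distance criterion: far pairs get an
    explicit treatment (force frozen over the step, no Hessian block),
    near pairs stay inside the implicit incremental potential."""
    explicit, implicit = [], []
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            if np.linalg.norm(pos[i] - pos[j]) > d_switch:
                explicit.append((i, j))
            else:
                implicit.append((i, j))
    return explicit, implicit
```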

Citations: 0
Curved Three-Director Cosserat Shells with Strong Coupling
IF 2.7 | CAS Q4 Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-10-17 | DOI: 10.1111/cgf.15183
F. Löschner, J. A. Fernández-Fernández, S. R. Jeske, J. Bender

Continuum-based shell models are an established approach for the simulation of thin deformables in computer graphics. However, existing research in physically-based animation is mostly focused on shear-rigid Kirchhoff-Love shells. In this work we explore three-director Cosserat (micropolar) shells which introduce additional rotational degrees of freedom. This microrotation field models transverse shearing and in-plane drilling rotations. We propose an incremental potential formulation of the Cosserat shell dynamics which allows for strong coupling with frictional contact and other physical systems. We evaluate a corresponding finite element discretization for non-planar shells using second-order elements which alleviates shear-locking and permits simulation of curved geometries. Our formulation and the discretization, in particular of the rotational degrees of freedom, is designed to integrate well with typical simulation approaches in physically-based animation. While the discretization of the rotations requires some care, we demonstrate that they do not pose significant numerical challenges in Newton's method. In our experiments we also show that the codimensional shell model is consistent with the respective three-dimensional model. We qualitatively compare our formulation with Kirchhoff-Love shells and demonstrate intriguing use cases for the additional modes of control over dynamic deformations offered by the Cosserat model such as directly prescribing rotations or angular velocities and influencing the shell's curvature.
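
The incremental potential formulation referenced here can be sketched for the translational degrees of freedom alone; the micropolar rotation field, which lives on SO(3) and is the paper's actual contribution, is deliberately omitted. `grad_E` and `hess_E` stand in for an arbitrary elastic energy; this is a generic textbook-style step, not the paper's discretization.

```python
import numpy as np

def incremental_potential_step(x, v, h, M, f_ext, grad_E, hess_E,
                               iters=20, tol=1e-8):
    """One implicit step by minimizing the incremental potential
        Phi(x) = 1/(2 h^2) (x - x_hat)^T M (x - x_hat) + E(x),
    with inertial predictor x_hat = x + h v + h^2 M^{-1} f_ext.
    M is a lumped (diagonal) mass matrix passed as a vector."""
    x_hat = x + h * v + h * h * f_ext / M
    x_new = x.copy()
    for _ in range(iters):
        g = M * (x_new - x_hat) / h**2 + grad_E(x_new)   # grad Phi
        H = np.diag(M) / h**2 + hess_E(x_new)            # hess Phi
        dx = np.linalg.solve(H, -g)                      # Newton step
        x_new += dx
        if np.linalg.norm(dx) < tol:
            break
    return x_new, (x_new - x) / h                        # new state, velocity
```

Because contact (IPC), magnetic, or shell energies all enter this objective additively, different physical systems couple strongly through the single minimization, which is the design choice both this paper and the magnet paper above share.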

Citations: 0
Generating Flight Summaries Conforming to Cinematographic Principles
IF 2.7 | CAS Q4 Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-10-17 | DOI: 10.1111/cgf.15179
Christophe Lino, Marie-Paule Cani

We propose an automatic method for generating flight summaries of prescribed duration, given any planned 3D trajectory of a flying object. The challenge is to select relevant time-ellipses while keeping and adequately framing the most interesting parts of the trajectory, and enforcing cinematographic rules between the selected shots. Our solution optimizes the visual quality of the output video both in terms of camera view and film editing choices, thanks to a new optimization technique designed to jointly optimize the selection of the interesting parts of a flight and the camera animation parameters over time. To the best of our knowledge, this solution is the first to address camera control, film editing, and trajectory summarization at once. Ablation studies demonstrate the visual quality of the flight summaries we generate compared to alternative methods.
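
The summarization half of the problem can be pictured as a budgeted selection. The sketch below reduces it to a 0/1 knapsack over pre-scored candidate segments, which is a deliberate simplification: the actual method jointly optimizes segment selection with camera animation parameters and cinematographic transition costs.

```python
def select_segments(segments, budget):
    """segments: list of (duration, interest) with integer durations;
    budget: total summary duration on the same integer time grid.
    Returns indices of the chosen segments (0/1 knapsack DP)."""
    n = len(segments)
    best = [[0.0] * (budget + 1) for _ in range(n + 1)]
    for i, (dur, score) in enumerate(segments, 1):
        for t in range(budget + 1):
            best[i][t] = best[i - 1][t]
            if dur <= t and best[i - 1][t - dur] + score > best[i][t]:
                best[i][t] = best[i - 1][t - dur] + score
    chosen, t = [], budget           # backtrack the optimal choice
    for i in range(n, 0, -1):
        if best[i][t] != best[i - 1][t]:
            chosen.append(i - 1)
            t -= segments[i - 1][0]
    return chosen[::-1]
```

On an integer time grid this runs in O(n x budget), cheap enough to sit inside an outer loop that also refines camera parameters.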

Citations: 0
Robust and Artefact-Free Deformable Contact with Smooth Surface Representations
IF 2.7 | CAS Q4 Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-10-17 | DOI: 10.1111/cgf.15187
Y. Du, Y. Li, S. Coros, B. Thomaszewski

Modeling contact between deformable solids is a fundamental problem in computer animation, mechanical design, and robotics. Existing methods based on C⁰ discretizations (piece-wise linear or polynomial surfaces) suffer from discontinuities and irregularities in tangential contact forces, which can significantly affect simulation outcomes and even prevent convergence. In this work, we show that these limitations can be overcome with a smooth surface representation based on Implicit Moving Least Squares (IMLS). In particular, we propose a self-collision detection scheme tailored to IMLS surfaces that enables robust and efficient handling of challenging self contacts. Through a series of test cases, we show that our approach offers advantages over existing methods in terms of accuracy and robustness for both forward and inverse problems.
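
For intuition about the surface representation, the simplest IMLS variant blends point-plane distances of oriented samples with Gaussian weights, yielding a smooth signed field. The kernel choice and this particular IMLS form are assumptions for illustration; the paper's formulation and its self-collision scheme are more involved.

```python
import numpy as np

def imls_value(x, points, normals, h):
    """Signed IMLS field from oriented surface samples:
        f(x) = sum_i w_i(x) n_i.(x - p_i) / sum_i w_i(x),
    with Gaussian weights w_i(x) = exp(-|x - p_i|^2 / h^2).
    The zero set f(x) = 0 is a smooth surface, so contact normals and
    tangential forces vary smoothly, unlike on a triangle mesh."""
    d = x - points                               # (n, 3) offsets
    w = np.exp(-np.sum(d * d, axis=1) / h**2)    # (n,) weights
    return np.dot(w, np.sum(d * normals, axis=1)) / np.sum(w)
```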

Citations: 0
Diffusion-based Human Motion Style Transfer with Semantic Guidance
IF 2.7 | CAS Q4 Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-10-09 | DOI: 10.1111/cgf.15169
Lei Hu, Zihao Zhang, Yongjing Ye, Yiwen Xu, Shihong Xia

3D human motion style transfer is a fundamental problem in computer graphics and animation processing. Existing AdaIN-based methods necessitate datasets with balanced style distribution and content/style labels to train the clustered latent space. However, we may encounter a single unseen style example in practical scenarios, but not in sufficient quantity to constitute a style cluster for AdaIN-based methods. Therefore, in this paper, we propose a novel two-stage framework for few-shot style transfer learning based on the diffusion model. Specifically, in the first stage, we pre-train a diffusion-based text-to-motion model as a generative prior so that it can cope with various content motion inputs. In the second stage, based on the single style example, we fine-tune the pre-trained diffusion model in a few-shot manner to make it capable of style transfer. The key idea is to regard the reverse process of diffusion as a motion-style translation process, since motion styles can be viewed as special motion variations. During the fine-tuning for style transfer, a simple yet effective semantic-guided style transfer loss, coordinated with a style example reconstruction loss, is introduced to supervise the style transfer in CLIP semantic space. Qualitative and quantitative evaluations demonstrate that our method achieves state-of-the-art performance and has practical applications. The source code is available at https://github.com/hlcdyy/diffusion-based-motion-style-transfer.
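
A hedged sketch of how the two supervision terms could combine in one fine-tuning step follows. The callables `denoiser` and `clip_motion_encoder`, the tensor layout, and the weight `lam` are placeholders; only the overall structure (noise-prediction reconstruction on the style example plus a CLIP-space semantic term) follows the abstract.

```python
import torch
import torch.nn.functional as F

def style_finetune_loss(denoiser, clip_motion_encoder, x0_style,
                        style_text_feat, alphas_cumprod, lam=0.1):
    """One evaluation of a combined fine-tuning objective: DDPM
    noise-prediction reconstruction on the single style example plus
    a semantic style term in CLIP space."""
    b = x0_style.shape[0]
    t = torch.randint(0, len(alphas_cumprod), (b,), device=x0_style.device)
    a = alphas_cumprod[t].view(b, *([1] * (x0_style.dim() - 1)))
    eps = torch.randn_like(x0_style)
    x_t = a.sqrt() * x0_style + (1 - a).sqrt() * eps      # forward diffusion
    eps_hat = denoiser(x_t, t)
    recon = F.mse_loss(eps_hat, eps)                      # reconstruction loss
    x0_hat = (x_t - (1 - a).sqrt() * eps_hat) / a.sqrt()  # one-step denoise
    sem = 1 - F.cosine_similarity(clip_motion_encoder(x0_hat),
                                  style_text_feat, dim=-1).mean()
    return recon + lam * sem                              # semantic-guided total
```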

Citations: 0
Multiphase Viscoelastic Non-Newtonian Fluid Simulation
IF 2.7 | CAS Q4 Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-10-09 | DOI: 10.1111/cgf.15180
Y. Zhang, S. Long, Y. Xu, X. Wang, C. Yao, J. Kosinka, S. Frey, A. Telea, X. Ban

We propose an SPH-based method for simulating viscoelastic non-Newtonian fluids within a multiphase framework. For this, we use mixture models to handle component transport and conformation tensor methods to handle the fluid's viscoelastic stresses. In addition, we consider a bonding effects network to handle the impact of microscopic chemical bonds on phase transport. Our method supports the simulation of both steady-state viscoelastic fluids and discontinuous shear behavior. Compared to previous work on single-phase viscous non-Newtonian fluids, our method can capture more complex behavior, including material mixing processes that generate non-Newtonian fluids. We adopt a uniform set of variables to describe shear thinning, shear thickening, and ordinary Newtonian fluids while automatically calculating local rheology in inhomogeneous solutions. In addition, our method can simulate large viscosity ranges under explicit integration schemes, which typically requires implicit viscosity solvers under earlier single-phase frameworks.
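
One way a single set of variables can span shear thinning, shear thickening, and Newtonian behavior is a generalized-Newtonian viscosity law; the Cross model below is a stand-in assumption, not necessarily the rheology used in the paper.

```python
import numpy as np

def shear_rate(grad_v):
    """Scalar shear rate sqrt(2 D:D) from the velocity gradient, with
    D the symmetric strain-rate tensor."""
    D = 0.5 * (grad_v + grad_v.T)
    return np.sqrt(2.0 * np.sum(D * D))

def cross_viscosity(gamma_dot, mu0, mu_inf, k, n):
    """Cross model: mu = mu_inf + (mu0 - mu_inf) / (1 + (k*gamma)^n).
    mu0 > mu_inf gives shear thinning, mu0 < mu_inf shear thickening,
    and mu0 == mu_inf recovers a Newtonian fluid, so one parameter set
    covers all three regimes."""
    return mu_inf + (mu0 - mu_inf) / (1.0 + (k * gamma_dot) ** n)
```

In a mixture model, `mu0`, `mu_inf`, `k`, and `n` would be blended per particle from the local phase fractions, giving the automatically computed local rheology the abstract describes.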

Citations: 0
Learning to Play Guitar with Robotic Hands
IF 2.7 | CAS Q4 Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-10-09 | DOI: 10.1111/cgf.15166
Chaoyi Luo, Pengbin Tang, Yuqi Ma, Dongjin Huang

Playing the guitar is a dexterous human skill that poses significant challenges in computer graphics and robotics due to the precision required in finger positioning and coordination between hands. Current methods often rely on motion capture data to replicate specific guitar playing segments, which restricts the range of performances and demands intricate post-processing. In this paper, we introduce a novel reinforcement learning model that can play the guitar with robotic hands from input tablatures, without the need for motion capture datasets. To achieve this, we divide the simulation task of playing guitar into three stages: (a) for an input tablature, we first generate corresponding fingerings that align with human habits; (b) using the generated fingerings as guidance, we train a neural network for controlling the fingers of the left hand via deep reinforcement learning; and (c) we generate plucking movements for the right hand based on inverse kinematics according to the tablature. We evaluate our method by employing precision, recall, and F1 scores as quantitative metrics to thoroughly assess its performance in playing musical notes. In addition, we conduct qualitative analysis through user studies to evaluate the visual and auditory effects of guitar performance. The results demonstrate that our model excels at playing most moderately difficult and easier musical pieces, accurately playing nearly all notes.
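
The quantitative metrics named above are easy to state precisely. The sketch below assumes note events are matched exactly as discrete triples, which is our simplification of the evaluation protocol.

```python
def note_metrics(played, target):
    """Precision, recall, and F1 over note events, each encoded e.g.
    as a (string, fret, onset_step) triple."""
    played, target = set(played), set(target)
    tp = len(played & target)                      # correctly played notes
    precision = tp / len(played) if played else 0.0
    recall = tp / len(target) if target else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```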

Citations: 0
SketchAnim: Real-time sketch animation transfer from videos
IF 2.7 | CAS Q4 Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-10-09 | DOI: 10.1111/cgf.15176
Gaurav Rai, Shreyas Gupta, Ojaswa Sharma

Animating hand-drawn sketches is a beloved art form. It gives the animator expressive freedom, but requires significant expertise. In this work, we introduce a novel sketch animation framework designed to address inherent challenges such as motion extraction, motion transfer, and occlusion. The framework takes as input an exemplar video featuring a moving object and utilizes a robust motion transfer technique to animate the input sketch. Comparative evaluations demonstrate the superior performance of our method over existing sketch animation techniques. Notably, our approach offers a higher level of user accessibility than conventional sketch-based animation systems, positioning it as a promising contribution to the field of sketch animation. https://graphics-research-group.github.io/SketchAnim/
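
As a toy stand-in for the motion-transfer stage (the paper's skeleton embedding and occlusion handling are omitted), tracked keypoint displacements from the video can be blended onto sketch control points; the inverse-distance weighting below is purely an illustrative assumption.

```python
import numpy as np

def transfer_keypoint_motion(sketch_pts, kp_ref, kp_frames):
    """Drive sketch control points with tracked video keypoints: each
    sketch point follows an inverse-distance-weighted blend of keypoint
    displacements relative to a reference frame."""
    d = np.linalg.norm(sketch_pts[:, None, :] - kp_ref[None, :, :], axis=2)
    w = 1.0 / (d + 1e-6)
    w /= w.sum(axis=1, keepdims=True)     # (n_sketch, n_kp) blend weights
    return [sketch_pts + w @ (kp - kp_ref) for kp in kp_frames]
```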

Citations: 0
Creating a 3D Mesh in A-pose from a Single Image for Character Rigging
IF 2.7 | CAS Q4 Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-10-09 | DOI: 10.1111/cgf.15177
Seunghwan Lee, C. Karen Liu

Learning-based methods for 3D content generation have shown great potential to create 3D characters from text prompts, videos, and images. However, current methods primarily focus on generating static 3D meshes, overlooking the crucial aspect of creating animatable 3D meshes. Directly using 3D meshes generated by existing methods to create underlying skeletons for animation presents many challenges, because the generated mesh might exhibit geometry artifacts or assume arbitrary poses that complicate the subsequent rigging process. This work proposes a new framework for generating a 3D animatable mesh from a single 2D image depicting the character. We do so by constraining the generated 3D mesh to assume an A-pose, which mitigates geometry artifacts and facilitates the use of existing automatic rigging methods. Our approach aims to leverage the generative power of existing models across modalities without the need for new data or large-scale training. We evaluate the effectiveness of our framework with qualitative results, as well as ablation studies and quantitative comparisons with existing 3D mesh generation models.

Citations: 0
Learning to Move Like Professional Counter-Strike Players
IF 2.7 | CAS Q4 Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-10-09 | DOI: 10.1111/cgf.15173
D. Durst, F. Xie, V. Sarukkai, B. Shacklett, I. Frosio, C. Tessler, J. Kim, C. Taylor, G. Bernstein, S. Choudhury, P. Hanrahan, K. Fatahalian

In multiplayer, first-person shooter games like Counter-Strike: Global Offensive (CS:GO), coordinated movement is a critical component of high-level strategic play. However, the complexity of team coordination and the variety of conditions present in popular game maps make it impractical to author hand-crafted movement policies for every scenario. We show that it is possible to take a data-driven approach to creating human-like movement controllers for CS:GO. We curate a team movement dataset comprising 123 hours of professional game play traces, and use this dataset to train a transformer-based movement model that generates human-like team movement for all players in a “Retakes” round of the game. Importantly, the movement prediction model is efficient. Performing inference for all players takes less than 0.5 ms per game step (amortized cost) on a single CPU core, making it plausible for use in commercial games today. Human evaluators assess that our model behaves more like humans than both commercially-available bots and procedural movement controllers scripted by experts (16% to 59% higher by TrueSkill rating of “human-like”). Using experiments involving in-game bot vs. bot self-play, we demonstrate that our model performs simple forms of teamwork, makes fewer common movement mistakes, and yields movement distributions, player lifetimes, and kill locations similar to those observed in professional CS:GO match play.
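
The amortized-cost claim can be checked with a simple harness like the following; `model` and the state tensors are placeholders, and pinning to one core's worth of threads is an approximation of the paper's single-CPU-core setting.

```python
import time
import torch

@torch.no_grad()
def amortized_step_cost_ms(model, game_states, n_steps=1000):
    """Amortized per-game-step inference cost: one forward pass predicts
    movement for all players at once, so the cost per step is total wall
    time divided by the number of steps, independent of player count."""
    torch.set_num_threads(1)              # restrict to a single CPU thread
    start = time.perf_counter()
    for s in range(n_steps):
        model(game_states[s % len(game_states)])
    return (time.perf_counter() - start) / n_steps * 1e3   # ms per step
```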

Citations: 0