
Latest publications from Proceedings Computer Animation 1999

High level specification and control of communication gestures: the GESSYCA system
Pub Date : 1999-05-26 DOI: 10.1109/CA.1999.781196
Thierry Lebourque, S. Gibet
This paper describes a complete system for the specification and generation of communication gestures. A high-level language for specifying hand-arm communication gestures has been developed. This language is based both on a discrete description of space and on a movement decomposition inspired by sign-language gestures. Communication gestures are represented through symbolic commands which can be described by qualitative data and translated into spatiotemporal targets driving a generation system. Such an approach is possible for the class of generation models controlled through key-point information. The generation model used in our approach is composed of a set of sensory-motor servo loops. Each of these models resolves the inversion of the servo loop in real time, from the direct specification of location targets, while satisfying the psycho-motor laws of biological movement. The whole control system is applied to synthesis, and a validation of the synthesized movements is presented.
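The pipeline described above compiles symbolic gesture commands into spatiotemporal targets that sensory-motor servo loops then track. A minimal sketch of target tracking with a simple PD servo on a point effector — the gains, time step, and 3-D point model are illustrative assumptions, not the GESSYCA formulation:

```python
import numpy as np

def servo_track(targets, dt=0.01, steps_per_target=200, kp=60.0, kd=12.0):
    """Drive a point effector through successive spatial targets with a
    second-order PD servo — a stand-in for a sensory-motor servo loop.
    Gains and step counts are illustrative, not taken from the paper."""
    pos = np.zeros(3)
    vel = np.zeros(3)
    path = []
    for target in targets:
        for _ in range(steps_per_target):
            acc = kp * (np.asarray(target, float) - pos) - kd * vel  # PD law
            vel += acc * dt
            pos += vel * dt
            path.append(pos.copy())
    return np.array(path)

# two successive spatial targets, e.g. two key points of a gesture
path = servo_track([[1.0, 0.0, 0.0], [1.0, 1.0, 0.0]])
```

With these gains the effector settles on each target well within its allotted window, so chaining targets yields a smooth, biologically plausible-looking reach between key points.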
Citations: 35
MPEG-4 compatible faces from orthogonal photos
Pub Date : 1999-05-26 DOI: 10.1109/CA.1999.781211
Won-Sook Lee, M. Escher, Gaël Sannier, N. Magnenat-Thalmann
MPEG-4 is scheduled to become an international standard in March 1999. The paper demonstrates an experiment with a virtual cloning method and animation system that is compatible with the MPEG-4 standard facial-object specification. Our method uses orthogonal photos (front and side views) as input and reconstructs the 3D facial model. The method is based on extracting MPEG-4 face definition parameters (FDP) from the photos, which initialize a custom face, and on deforming a generic model. Texture mapping is employed using an image composed of the two orthogonal images, which is done completely automatically. A reconstructed head can be animated immediately inside our animation system, which conforms to the MPEG-4 standard specification of face animation parameters (FAP). The result is integrated into our virtual human director (VHD) system.
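Deforming a generic head so its feature points land on locations measured from the photos can be sketched with a radial-basis-function warp. The kernel choice and the 4-point example are assumptions for illustration; the paper's actual FDP-driven deformation scheme is not detailed here:

```python
import numpy as np

def rbf_warp(src_feats, dst_feats, vertices):
    """Move generic-model vertices so that each source feature point lands
    exactly on its measured target, interpolating smoothly in between."""
    src = np.asarray(src_feats, float)
    dst = np.asarray(dst_feats, float)
    V = np.asarray(vertices, float)
    # multiquadric kernel matrix between the feature points
    d = np.linalg.norm(src[:, None, :] - src[None, :, :], axis=-1)
    weights = np.linalg.solve(np.sqrt(d**2 + 1.0), dst - src)
    # evaluate the warp displacement at every vertex
    dv = np.linalg.norm(V[:, None, :] - src[None, :, :], axis=-1)
    return V + np.sqrt(dv**2 + 1.0) @ weights

# toy feature correspondences: generic positions vs. measured ones
src = [[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]]
dst = [[0, 0, 0], [1.2, 0, 0], [0, 1.1, 0], [1.2, 1.1, 0]]
warped = rbf_warp(src, dst, src)   # warping the feature points themselves
```

By construction the warp interpolates exactly at the feature points and deforms the rest of the mesh smoothly, which is the behavior a feature-driven head deformation needs.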
Citations: 78
Alhambra: a system for producing 2D animation
Pub Date : 1999-05-26 DOI: 10.1109/CA.1999.781197
Domingo Martín, J. Torres
There is great interest in producing computer animation that looks like classic 2D animation. Flat shading, silhouettes and inside contour lines are visual characteristics that, joined to flexible expressiveness, constitute the basic elements of 2D animation. We have developed methods for obtaining the silhouettes and interior curves of polygonal models. Virtual lights are a new method for modeling the visualization of inside curves. The needed flexibility of the model is achieved by the use of hierarchical nonlinear transformations.
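The object-space silhouette of a polygonal model — the starting point for contour lines like those above — consists of the edges shared by a front-facing and a back-facing polygon. A minimal sketch of that standard test (Alhambra's own extraction, and the virtual-lights method for interior curves, involve more than this):

```python
import numpy as np

def silhouette_edges(vertices, faces, view_dir):
    """Return the edges shared by one front-facing and one back-facing
    triangle with respect to view_dir — the classic object-space
    silhouette test for a triangle mesh."""
    V = np.asarray(vertices, float)
    view = np.asarray(view_dir, float)
    facing = {}
    edge_faces = {}
    for fi, (a, b, c) in enumerate(faces):
        n = np.cross(V[b] - V[a], V[c] - V[a])   # face normal
        facing[fi] = np.dot(n, view) < 0         # front-facing toward viewer
        for e in ((a, b), (b, c), (c, a)):
            edge_faces.setdefault(tuple(sorted(e)), []).append(fi)
    return [e for e, fs in edge_faces.items()
            if len(fs) == 2 and facing[fs[0]] != facing[fs[1]]]
```

On a fold of two triangles with opposite winding, only the shared crease edge is reported; on a closed mesh the result is the visible outline plus interior creases where facing flips.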
Citations: 6
Emotionally expressive agents
Pub Date : 1999-05-26 DOI: 10.1109/CA.1999.781198
M. S. El-Nasr, T. Ioerger, J. Yen, D. House, F. Parke
The ability to express emotions is important for creating believable interactive characters. To simulate emotional expressions in an interactive environment, an intelligent agent needs both an adaptive model for generating believable responses, and a visualization model for mapping emotions into facial expressions. Recent advances in intelligent agents and in facial modeling have produced effective algorithms for these tasks independently. We describe a method for integrating these algorithms to create an interactive simulation of an agent that produces appropriate facial expressions in a dynamic environment. Our approach to combining a model of emotions with a facial model represents a first step towards developing the technology of a truly believable interactive agent which has a wide range of applications from designing intelligent training systems to video games and animation tools.
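The visualization half of such an agent maps an emotional state onto facial expressions. A toy sketch of that mapping as emotion-to-blend-shape weights — the emotion names, shape names, and mixing matrix are all illustrative assumptions, not the paper's model:

```python
# Hypothetical expression templates: how strongly each emotion drives
# each facial blend shape. Values are invented for illustration.
SHAPES = ("brow_raise", "brow_furrow", "smile", "jaw_drop")
MIX = {
    "joy":     {"smile": 1.0, "brow_raise": 0.3},
    "anger":   {"brow_furrow": 1.0},
    "fear":    {"brow_raise": 0.8, "jaw_drop": 0.6},
    "sadness": {"brow_furrow": 0.4},
}

def expression_weights(emotion_state):
    """Blend per-emotion expression templates, scaled by intensity and
    clamped to [0, 1] so co-occurring emotions cannot overdrive a shape."""
    weights = dict.fromkeys(SHAPES, 0.0)
    for emotion, intensity in emotion_state.items():
        for shape, w in MIX[emotion].items():
            weights[shape] = min(1.0, weights[shape] + intensity * w)
    return weights

blend = expression_weights({"joy": 0.5, "fear": 0.5})
```

Because the emotion model updates intensities continuously, the resulting weights interpolate smoothly between expressions as the agent's state evolves.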
Citations: 129
Visible volume buffer for efficient hair expression and shadow generation
Pub Date : 1999-05-26 DOI: 10.1109/CA.1999.781199
Waiming Kong, M. Nakajima
Much research has been conducted on hair modeling and hair rendering, with considerable success. However, the immense number of hair strands means that the memory and CPU-time requirements are very severe. To reduce the memory and time needed for hair modeling and rendering, a visible volume buffer is proposed. Instead of using thousands of thin hairs, memory usage and hair-modeling time can be reduced by using coarse background hairs and fine surface hairs. The background hairs can be constructed from thick hairs. To improve the look of the hair model, the background hair near the surface is broken down into numerous thin hairs and rendered. The visible volume buffer is used to determine the surface hairs. The rendering time of the background and surface hairs is found to be faster than the conventional hair model by a factor of more than four, with little loss in image quality. The visible volume buffer is also used to produce shadows for the hair model.
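The core idea — mark the volume cells a viewer can actually see, then refine only the hair there — can be sketched on a boolean voxel grid. This is a simplification under assumed axis-aligned viewing; the paper's construction is more general:

```python
import numpy as np

def visible_voxels(occupancy):
    """Given a boolean voxel grid indexed [x, y, z] with the viewer looking
    down +z, mark the first occupied voxel in each (x, y) column as
    visible. Strands touching visible voxels would be refined into fine
    surface hairs; the rest stay as coarse background hairs."""
    vis = np.zeros_like(occupancy, dtype=bool)
    nx, ny, _ = occupancy.shape
    for x in range(nx):
        for y in range(ny):
            hits = np.flatnonzero(occupancy[x, y])
            if hits.size:
                vis[x, y, hits[0]] = True
    return vis

# one column with two occupied voxels: only the nearer one is visible
occ = np.zeros((2, 1, 3), dtype=bool)
occ[0, 0, 1] = True
occ[0, 0, 2] = True
vis = visible_voxels(occ)
```

The same buffer supports cheap shadowing: a voxel occluded along the light direction (run the scan along that axis instead) is in shadow.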
Citations: 20
Virtual human animation based on movement observation and cognitive behavior models
Pub Date : 1999-05-26 DOI: 10.1109/CA.1999.781206
N. Badler, D. Chi, Sonu Chopra-Khullar
Automatically animating virtual humans with actions that reflect real human motions is still a challenge. We present a framework for animation that is based on utilizing empirical and validated data from movement observation and cognitive psychology. To illustrate these, we demonstrate a mapping from effort motion factors onto expressive arm movements, and from cognitive data to autonomous attention behaviors. We conclude with a discussion on the implications of this approach for the future of real time virtual human animation.
Citations: 58
A software system to carry-out virtual experiments on human motion
Pub Date : 1999-05-26 DOI: 10.1109/CA.1999.781195
F. Multon, J. Nougaret, G. Hégron, Luc Millet, B. Arnaldi
This work presents a simulation system designed to carry out virtual experiments on human motion. 3D visualization, automatic code generation and generic control design patterns provide biomechanicians and medics with dynamic simulation tools. The paper first deals with the design of mechanical models of human beings. It also presents design patterns of controllers for an upper-limb model composed of 11 degrees of freedom. As an example, two controllers are presented in order to illustrate these design patterns. The paper also presents a user-friendly interface dedicated to medics that makes it possible to enter orders in natural language.
Citations: 7
Animation of human walking in virtual environments
Pub Date : 1999-05-26 DOI: 10.1109/CA.1999.781194
Shih-kai Chung, J. Hahn
This paper presents an interactive hierarchical motion control system dedicated to the animation of human figure locomotion in virtual environments. As observed in gait experiments, controlling the trajectories of the feet during gait is a precise end-point control task. Inverse kinematics with optimal approaches are used to control the complex relationships between the motion of the body and the coordination of its legs. For each step, the simulation of the support leg is executed first, followed by the swing leg, which incorporates the position of the pelvis from the support leg. That is, the foot placement of the support leg serves as the kinematics constraint while the position of the pelvis is defined through the evaluation of a control criteria optimization. Then, the swing leg movement is defined to satisfy two criteria in order: collision avoidance and control criteria optimization. Finally, animation attributes, such as controlling parameters and pre-processed motion modules, are applied to achieve a variety of personalities and walking styles.
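Precise end-point control of a leg during gait reduces, in the simplest planar case, to two-link analytic inverse kinematics placing the ankle. A sketch with law-of-cosines IK — segment lengths, the sagittal-plane restriction, and the forward-knee convention are illustrative assumptions, not the paper's optimization-based formulation:

```python
import math

def leg_ik(hip, foot, thigh=0.45, shin=0.45):
    """Two-link analytic IK in the sagittal plane (y up). Returns the thigh
    angle from the downward vertical and the knee flexion (0 = straight
    leg), both in radians, placing the ankle at `foot`. Knee bends
    forward; unreachable targets are clamped to full extension."""
    dx = foot[0] - hip[0]
    dy = hip[1] - foot[1]                                # depth below hip
    d = max(1e-9, min(math.hypot(dx, dy), thigh + shin - 1e-9))
    # interior knee angle from the law of cosines; flexion = pi - interior
    cos_int = (thigh**2 + shin**2 - d**2) / (2.0 * thigh * shin)
    knee = math.pi - math.acos(max(-1.0, min(1.0, cos_int)))
    # thigh angle = direction to foot, rotated forward by the hip-vertex angle
    cos_a = (thigh**2 + d**2 - shin**2) / (2.0 * thigh * d)
    hip_angle = math.atan2(dx, dy) + math.acos(max(-1.0, min(1.0, cos_a)))
    return hip_angle, knee
```

Running the swing foot's planned trajectory through such an IK step by step yields joint angles satisfying the foot-placement constraint; the paper layers collision avoidance and control-criteria optimization on top.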
Citations: 59
Real-time collision detection for virtual surgery
Pub Date : 1999-05-26 DOI: 10.1109/CA.1999.781201
J. Lombardo, Marie-Paule Cani, Fabrice Neyret
We present a simple method for performing real-time collision detection in a virtual surgery environment. The method relies on graphics hardware to test the interpenetration between a virtual deformable organ and a rigid tool controlled by the user. The method makes it possible to take into account the motion of the tool between two consecutive time steps. For our specific application, the new method runs about a hundred times faster than the well-known oriented-bounding-box tree method.
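Accounting for the tool's motion between two time steps amounts to testing the segment swept by the tool tip against the organ's triangles. A software sketch of that test in Möller–Trumbore style — a stand-in only, since the paper performs the interference test on graphics hardware:

```python
import numpy as np

def segment_hits_triangle(p0, p1, tri, eps=1e-9):
    """True if the segment p0->p1 (e.g. the tool tip's path between two
    consecutive time steps) crosses the triangle `tri`."""
    a, b, c = (np.asarray(v, float) for v in tri)
    p0 = np.asarray(p0, float)
    d = np.asarray(p1, float) - p0          # segment direction
    e1, e2 = b - a, c - a
    pvec = np.cross(d, e2)
    det = np.dot(e1, pvec)
    if abs(det) < eps:                      # segment parallel to the plane
        return False
    tvec = p0 - a
    u = np.dot(tvec, pvec) / det            # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return False
    qvec = np.cross(tvec, e1)
    v = np.dot(d, qvec) / det               # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return False
    t = np.dot(e2, qvec) / det              # hit parameter along the segment
    return 0.0 <= t <= 1.0
```

Restricting t to [0, 1] is what makes this a between-time-steps test rather than an infinite-ray query: a fast tool cannot tunnel through the organ unnoticed.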
Citations: 150
Recursive dynamics and optimal control techniques for human motion planning
Pub Date : 1999-05-26 DOI: 10.1109/CA.1999.781215
Janzen Lo, Dimitris N. Metaxas
We present an efficient optimal control based approach to simulate dynamically correct human movements. We model virtual humans as a kinematic chain consisting of serial, closed loop, and tree-structures. To overcome the complexity limitations of the classical Lagrangian formulation and to include knowledge from biomechanical studies, we have developed a minimum-torque motion planning method. This new method is based on the use of optimal control theory within a recursive dynamics framework. Our dynamic motion planning methodology achieves high efficiency regardless of the figure topology. As opposed to a Lagrangian formulation, it obviates the need for the reformulation of the dynamic equations for different structured articulated figures. We then use a quasi-Newton method based nonlinear programming technique to solve our minimal torque-based human motion planning problem. This method achieves superlinear convergence. We use the screw theoretical method to compute analytically the necessary gradient of the motion and force. This provides a better conditioned optimization computation and allows the robust and efficient implementation of our method. Cubic spline functions have been used to make the search space for an optimal solution finite. We demonstrate the efficacy of our proposed method based on a variety of human motion tasks involving open and closed loop kinematic chains. Our models are built using parameters chosen from an anthropomorphic database. The results demonstrate that our approach generates natural looking and physically correct human motions.
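The inverse-dynamics evaluation at the heart of such an optimizer can be illustrated on the smallest possible case: a single pendulum joint following a minimum-acceleration (cubic) profile. The link parameters and profile are assumptions for illustration; the paper handles full articulated figures with recursive dynamics and quasi-Newton optimization:

```python
import numpy as np

def smoothstep(q0, q1, n):
    """Minimum-acceleration cubic profile with zero boundary velocities,
    sampled at n points over the motion."""
    tau = np.linspace(0.0, 1.0, n)
    return q0 + (q1 - q0) * (3.0 * tau**2 - 2.0 * tau**3)

def inverse_dynamics_1dof(q, dt, inertia=0.1, m=1.0, l=0.3, g=9.81):
    """Torque required to realise trajectory q(t) for a single pendulum
    link: tau = I*q'' + m*g*l*sin(q). Link mass, length and inertia are
    illustrative values."""
    acc = np.gradient(np.gradient(q, dt), dt)
    return inertia * acc + m * g * l * np.sin(q)

# one-second lift of the link from 0 to 45 degrees, dt = 0.01 s
q = smoothstep(0.0, np.pi / 4.0, n=101)
tau = inverse_dynamics_1dof(q, dt=0.01)
```

A torque-cost such as the integral of tau squared over this trajectory is exactly the kind of objective a spline-parameterized quasi-Newton search would minimize over candidate profiles.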
Citations: 32