Constrained framespace interpolation
Golam Ashraf, Kok Cheong Wong
DOI: 10.1109/CA.2001.982378

This paper aims to consistently blend different types of motions after establishing automatic correspondence between their salient features. Framespace interpolation is a consistent forward kinematics motion transition technique that uses weights from input spline curves to warp and blend motions. Its application has been limited to interactive interpolation of two or four cyclic motions. Though based on principles that minimize violations such as sliding of supporting end-effectors, it does not guarantee slide-free motion. This paper extends framespace interpolation to an unlimited chain of cyclic and acyclic motions via an improved coordination warp and constraints on transition curves. Inverse kinematics is used seamlessly to correct slide violations on the fly. These extensions open up exciting possibilities in real-time cyclification, blending and concatenation of a wide variety of human motions.
Automatic in-betweening in computer assisted animation by exploiting 2.5D modelling techniques
F. D. Fiore, Philip Schaeken, Koen Elens, F. Reeth
DOI: 10.1109/CA.2001.982393

This paper introduces a new method for automatic in-betweening in computer-assisted traditional animation. The solution is based on novel 2.5D modelling and animation techniques within a multi-level approach: basic 2D drawing primitives (curves) at level 0, explicit 2.5D modelling structures at level 1, the inclusion of 3D information by means of skeletons at level 2, and high-level deformation tools (possibly alongside other tools supporting specific purposes such as facial expression) at level 3. The underlying methodologies are explained and implementation results are discussed.
Pedestrians: creating agent behaviors through statistical analysis of observation data
K. Ashida, Seung-Joo Lee, J. Allbeck, H. C. Sun, N. Badler, Dimitris N. Metaxas
DOI: 10.1109/CA.2001.982380

Creating a complex virtual environment with human inhabitants that behave as we would expect real humans to behave is a difficult and time-consuming task. Time must be spent constructing the environment, creating human figures, creating animations for the agents' actions, and creating controls for the agents' behaviors, such as scripts, plans, and decision-makers. Often, work done for one virtual environment must be completely replicated for another. Robust, procedural actions that can be ported from one simulation to another would ease the creation of new virtual environments. Because walking is useful in many different virtual environments, creating natural-looking walking is important. In this paper we present a system for producing more natural-looking walking by incorporating actions for the upper body. We aim to provide a tool that authors of virtual environments can use to add realism to their characters without effort.
Integration of optimization by genetic algorithms into an L-system-based animation system
H. Noser, P. Stucki, H. Walser
DOI: 10.1109/CA.2001.982383

In computer graphics, L-systems represent a powerful rule-based language for modeling complex objects and their animation. However, designing objects and animations by rules is a difficult task because designers often cannot foresee the consequences of rules; this is especially true for non-experts in the domain. We therefore propose to enhance an L-system-based animation system with evolutionary features based on genetic algorithms (GAs). These features support the designer's task of interactively modeling objects and animations. Starting from an initial population of L-system-defined objects, the computer iteratively proposes new populations based on fitness values determined by the designer's creative or functional criteria. Moreover, automatic optimization of L-system-defined objects and animations is possible if an appropriate fitness function can be found for a given problem. We present a concept for integrating optimization by genetic algorithms into an L-system-based animation system. Typical examples, such as automatic function optimization and creative interactive design of objects, illustrate our work.
Human spine posture estimation from video images based on connected vertebra spheres model
D. Furukawa, K. Mori, Y. Suenaga
DOI: 10.1109/CA.2001.982391

This paper reports a method for estimating human spine posture from the front and side views of a human body taken by a video camera. We present a new 3D model that estimates the spine posture using connected vertebra spheres: each vertebra forming the spine is approximated as a sphere, and the spine is approximated as a series of spheres connected to each other. Each sphere carries control points that represent the outer shape of the body. The spine posture is estimated by moving the spheres and the control points, with the estimation driven by the matching ratio between a projected image of the model and an input image. X-ray CT slices are used to construct the model. We applied the proposed method to real human images, and the experimental results show that our 3D model works reasonably well for estimating human spine posture from real human images.
Real-time cloth simulation with sparse particles and curved faces
Masaki Oshita, A. Makinouchi
DOI: 10.1109/CA.2001.982396

In this paper, we present a novel technique for real-time cloth simulation that combines dynamic simulation and geometric techniques. Only a small number of particles (a few hundred at most) are controlled by dynamic simulation to capture global cloth behaviors such as waving and bending. The cloth surface is then smoothed based on the elastic forces applied to each particle and the distance between each pair of adjacent particles; this geometric smoothing efficiently reproduces local cloth behaviors such as twists and wrinkles. The proposed method is very simple, and is easy to implement and integrate with existing particle-based systems. We also describe a particle-based simulation system for efficient simulation with sparse particles. The proposed method has animated a richly detailed skirt in real time.
Animation based in dynamic simulation involving irregular objects with non-homogeneous rugosities
Luís A. Rivera, P. Carvalho, L. Velho
DOI: 10.1109/CA.2001.982386

We propose a new technique that provides a unified framework for modeling the collision dynamics of objects having arbitrary shape and rugosity. The technique assumes that the boundary of the object is expressed by elementary segments with some random distribution of perturbation, and collision detection and treatment are performed on this conceptual representation. To efficiently detect collisions between irregular objects, we introduce the augmented oriented bounding box tree.
T4: a motion-capture-based goal-directed real-time responsive locomotion engine
Tadasuke Tsumura, Takeharu Yoshizuka, Takashi Nojirino, T. Noma
DOI: 10.1109/CA.2001.982377

This paper presents a new locomotion engine called T4. T4 is designed to cover walking motions on a horizontal floor, including straight/curved locomotion and turning around, using captured motion data. It combines a step planner, which determines the parameters for the next step, with a database of captured step motion data. Captured step data are managed as 6D Delaunay triangles, and valid blended step motion data are obtained for the requested step parameters. This mechanism realizes: (1) validity of blended data, (2) goal-directedness, (3) responsiveness, (4) real-time performance, and (5) realism from motion capture.
Simulating virtual human crowds with a leader-follower model
Tsai-Yen Li, Ying-Jiun Jeng, Shi-Ing Chang
DOI: 10.1109/CA.2001.982381

Although virtual humans have been an active research topic for many years, most research has focused on simulating various aspects of a single humanoid rather than a crowd of people. We report our progress on generating collision-free gross motions for multiple virtual crowds. Each virtual crowd consists of a leader and many followers. The leader is in charge of generating motions for its own group, taking the motions of other crowds into account; the followers use artificial-life principles to follow the leader as it moves towards the goal. The high degrees of freedom involved in the crowd simulation system present a great challenge for gross motion planning. We adopt a decoupled path-planning approach, in which the paths already being executed by other leaders become motion constraints for the leader currently under consideration. In addition, we take a unified view of the search space to extend the planner to consider a whole crowd instead of just its leader. Experimental results show that our planner can efficiently generate satisfactory motions. We believe that such a planning system is a good addition for controlling groups of avatars in a 3D virtual environment.
A layered approach to deformable modeling and animation
C. Chua, U. Neumann
DOI: 10.1109/CA.2001.982392

Our approach integrates three mechanisms needed to model and animate deformable objects: controlling mechanisms, geometric surface deformation, and mesh refinement. Most approaches focus on either modeling physically correct behavior or on alternative representations for deformable models, which results in sets of method-specific algorithms, each best suited to a particular class of deformable objects. By encapsulating each process, our system introduces an interface that allows existing controllers to be integrated in a modular fashion. We demonstrate this in our system by instantiating hardware-accelerated free-form deformation for geometric deformation and midpoint subdivision for mesh refinement. Finally, we discuss available options for the controlling mechanisms and show how this approach leads to a generalizable framework.