Visually believable explosions in real time
Claude Martins, J. W. Buchanan, J. Amanatides
Proceedings Computer Animation 2001, Fourteenth Conference on Computer Animation (Cat. No.01TH8596). DOI: 10.1109/CA.2001.982398
The paper presents a real-time, physically based simulation of object damage and motion due to a blast wave impact. An improved connected voxel model is used to represent the objects. The paper also explores auxiliary visual effects caused by the blast wave that increase visual believability without being rigorously physically based or computationally expensive.
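The abstract does not spell out the connected voxel model's damage rule; as an illustration only, here is a minimal Python sketch of one plausible reading, in which inter-voxel links break where the blast impulse exceeds a threshold. The function name, data layout, and inverse-square falloff are all assumptions, not the paper's actual method.

```python
def blast_damage(voxels, links, source, strength, threshold):
    """Break inter-voxel links whose received impulse exceeds a threshold.

    voxels: dict id -> (x, y, z) voxel center position
    links:  set of frozenset({id_a, id_b}) connections between voxels
    source: (x, y, z) blast origin; strength: impulse at unit distance
    The impulse falls off with the inverse square of distance (a common
    simplification; the paper's blast-wave model may differ).
    """
    surviving = set()
    for link in links:
        a, b = tuple(link)
        # evaluate the impulse at the link's midpoint
        mid = [(pa + pb) / 2 for pa, pb in zip(voxels[a], voxels[b])]
        d2 = sum((m - s) ** 2 for m, s in zip(mid, source))
        impulse = strength / max(d2, 1e-9)
        if impulse < threshold:
            surviving.add(link)
    return surviving
```

Voxels near the blast lose their connections and can then be animated as free debris, while distant parts of the object stay connected.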
Interactive modeling of the human musculature
Amaury Aubel, D. Thalmann
Proceedings Computer Animation 2001, Fourteenth Conference on Computer Animation (Cat. No.01TH8596). DOI: 10.1109/CA.2001.982390
In this paper, we extend our previous work (Proc. Computer Animation and Simulation, pp. 125-135, Aug. 2000) and propose a muscle model that is suitable for computer graphics, based on physiological and anatomical considerations. Muscle motion and deformation are automatically derived from one or several action lines, each action line being deformed by a 1D mass-spring system. The resulting model is fast, can accommodate most superficial human muscles, and could easily be integrated into current modeling packages. Example animations can be found at .
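To make the action-line idea concrete, here is a minimal Python sketch of one explicit-Euler step of a 1D mass-spring chain whose endpoints are pinned to bone attachment sites. All names, unit masses, and the integration scheme are assumptions for illustration; the paper's formulation may differ.

```python
def step_action_line(points, velocities, rest_len, k, damping, dt, ends):
    """One explicit-Euler step of a 1D mass-spring chain (an 'action line').

    points, velocities: lists of [x, y, z] for each mass along the line.
    ends: (origin, insertion) bone attachment positions that the first
    and last mass follow. Unit masses are assumed.
    """
    n = len(points)
    forces = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n - 1):
        d = [points[i + 1][c] - points[i][c] for c in range(3)]
        length = max(sum(x * x for x in d) ** 0.5, 1e-9)
        f = k * (length - rest_len)          # Hooke's law along the segment
        for c in range(3):
            forces[i][c] += f * d[c] / length
            forces[i + 1][c] -= f * d[c] / length
    for i in range(n):
        for c in range(3):
            velocities[i][c] = (velocities[i][c] + dt * forces[i][c]) * damping
            points[i][c] += dt * velocities[i][c]
    points[0], points[-1] = list(ends[0]), list(ends[1])  # pin to the bones
    return points, velocities
```

The muscle surface would then be deformed by following the masses along the line as the skeleton moves.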
Analysis and synthesis of facial expressions with hand-generated muscle actuation basis
Byoungwon Choe, Hyeongseok Ko
Proceedings Computer Animation 2001, Fourteenth Conference on Computer Animation (Cat. No.01TH8596). DOI: 10.1109/CA.2001.982372
We present a performance-driven facial animation system that analyzes captured expressions to find muscle actuation values and synthesizes expressions from those values. What distinguishes our approach is that we let artists sculpt the initial draft of the actuation basis (the basic facial shapes corresponding to the isolated actuation of individual muscles) instead of computing skin surface deformation entirely from mathematical models such as the finite element method. We synthesize expressions as linear combinations of the basis elements, and analyze expressions by finding the weights of those combinations. Even though the hand-generated actuation basis captures the essence of the subject's characteristic expressions, it is not accurate enough to be used directly in the subsequent computational procedures, so we also describe an iterative algorithm that increases the accuracy of the actuation basis. The experimental results suggest that our artist-in-the-loop method produces a more predictable and controllable outcome than purely mathematical models, and can thus be a quite useful tool in animation production.
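The synthesis/analysis pair described above can be sketched in a few lines of Python: synthesis is a weighted sum of basis displacements, and analysis recovers the weights by least squares. The tiny Gaussian-elimination solver and flat vertex layout are illustrative assumptions, not the authors' implementation.

```python
def synthesize(neutral, basis, weights):
    """Expression = neutral face + weighted sum of actuation-basis displacements."""
    return [n + sum(w * b[i] for w, b in zip(weights, basis))
            for i, n in enumerate(neutral)]

def analyze(neutral, basis, captured):
    """Recover actuation weights by least squares (normal equations).

    Solves min_w || captured - synthesize(neutral, basis, w) ||^2.
    A tiny Gaussian elimination stands in for a real linear solver.
    """
    m, n = len(basis), len(neutral)
    target = [captured[i] - neutral[i] for i in range(n)]
    # normal equations: (B B^T) w = B t
    A = [[sum(basis[r][i] * basis[c][i] for i in range(n)) for c in range(m)]
         for r in range(m)]
    rhs = [sum(basis[r][i] * target[i] for i in range(n)) for r in range(m)]
    for col in range(m):                       # forward elimination (no pivoting)
        for row in range(col + 1, m):
            factor = A[row][col] / A[col][col]
            for c in range(col, m):
                A[row][c] -= factor * A[col][c]
            rhs[row] -= factor * rhs[col]
    w = [0.0] * m
    for row in range(m - 1, -1, -1):           # back substitution
        s = rhs[row] - sum(A[row][c] * w[c] for c in range(row + 1, m))
        w[row] = s / A[row][row]
    return w
```

With a hand-sculpted basis, the recovered weights are directly interpretable as per-muscle actuation levels, which is what makes the artist-in-the-loop result controllable.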
Merging deformable and rigid body mechanics simulation
J. Jansson, J. Vergeest, G. Kuczogi, I. Horváth
Proceedings Computer Animation 2001, Fourteenth Conference on Computer Animation (Cat. No.01TH8596). DOI: 10.1109/CA.2001.982388
Presents an interface between a deformable body mechanics model and a rigid body mechanics model. What is novel in our approach is that the physical representation in both models is the same, which ensures behavioral correctness and allows great flexibility. We use a mass-spring representation extended with the concept of volume, and thus contact and collision. All physical interaction occurs between the mass elements only, so there is no need for explicit handling of rigid-deformable or rigid-rigid body interaction. This also means that bodies can be partially rigid and partially deformable, and that a part of a body can be switched between rigid and deformable dynamically. We present a demonstration example, as well as possible applications in conceptual design engineering, geometric modeling, and computer animation.
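A minimal Python sketch of the unifying idea: every body is mass elements, and rigidity is just a constraint on a group of them. Here a rigid group translates with its members' mean force (rotation and volume/contact are omitted for brevity; the paper's actual coupling is richer). All names are assumptions.

```python
def integrate(positions, velocities, forces, rigid_groups, dt):
    """Advance all mass elements one step; grouped elements move rigidly.

    rigid_groups: list of index lists; each group translates with the mean
    of its members' forces, so internal forces cancel and the group keeps
    its shape (rotation omitted for brevity). Ungrouped masses deform
    freely. Editing rigid_groups at runtime toggles rigidity per region.
    Unit masses are assumed.
    """
    grouped = {i for g in rigid_groups for i in g}
    for i in range(len(positions)):
        if i not in grouped:                   # free (deformable) mass element
            velocities[i] = [v + dt * f for v, f in zip(velocities[i], forces[i])]
            positions[i] = [p + dt * v for p, v in zip(positions[i], velocities[i])]
    for g in rigid_groups:                     # rigid cluster: shared translation
        avg_f = [sum(forces[i][c] for i in g) / len(g) for c in range(3)]
        for i in g:
            velocities[i] = [v + dt * f for v, f in zip(velocities[i], avg_f)]
            positions[i] = [p + dt * v for p, v in zip(positions[i], velocities[i])]
    return positions, velocities
```

Because rigid and deformable parts share one representation, a spring force computed against a rigid element needs no special case: it simply contributes to that element's group.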
AI-based animation for interactive storytelling
M. Cavazza, Fred Charles, Steven J. Mead
Proceedings Computer Animation 2001, Fourteenth Conference on Computer Animation (Cat. No.01TH8596). DOI: 10.1109/CA.2001.982384
In this paper, we describe a method for implementing AI-based animation of artificial actors in the context of interactive storytelling. We have developed a fully implemented prototype based on the Unreal(TM) game engine and carried out experiments with a simple sitcom-like scenario. We discuss the central role of artificial actors in interactive storytelling and how real-time generation of their behaviour contributes to the creation of a dynamic storyline. We follow previous work describing the behaviour of artificial actors through AI planning formalisms, and adapt it to the context of narrative representation. The set of all possible behaviours, accounting for various instantiations of a basic plot, can be represented as an AND/OR graph. A real-time variant of the AO* algorithm can be used to interleave planning and action, thus allowing characters to interact with each other and with the user. Finally, we present several examples of short plots and situations generated by the system from the dynamic interaction of artificial actors.
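To illustrate the AND/OR plot representation, here is a small Python sketch that solves such a graph for the cheapest plan. A plain recursive solve stands in for the paper's real-time AO* variant (which interleaves planning with execution); the graph encoding and the sitcom-flavored example are assumptions for illustration.

```python
def solve(node, graph, cost):
    """Return (best_cost, plan) for an AND/OR graph rooted at `node`.

    graph maps a node to ('AND', children), meaning all subgoals are
    required, or ('OR', children), meaning the cheapest alternative is
    chosen; nodes absent from graph are leaves, i.e. primitive character
    actions with a cost taken from `cost`.
    """
    if node not in graph:                 # leaf: a primitive character action
        return cost[node], [node]
    kind, children = graph[node]
    results = [solve(ch, graph, cost) for ch in children]
    if kind == 'AND':                     # every subgoal must be achieved
        return (sum(c for c, _ in results),
                [a for _, plan in results for a in plan])
    return min(results)                   # OR: cheapest alternative wins
```

Different OR choices correspond to different instantiations of the basic plot, which is what lets user interference redirect the story at runtime.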
Face animation based on observed 3D speech dynamics
Gregor A. Kalberer, L. Gool
Proceedings Computer Animation 2001, Fourteenth Conference on Computer Animation (Cat. No.01TH8596). DOI: 10.1109/CA.2001.982373
Realistic face animation is especially hard because we are all experts in the perception and interpretation of face dynamics. One approach is to simulate facial anatomy. Alternatively, animation can be based on first observing the visible 3D dynamics, extracting the basic modes, and then putting these together according to the required performance. This is the strategy followed in this paper, which focuses on speech. The approach follows a kind of bootstrap procedure. First, 3D shape statistics are learned from a talking face with a relatively small number of markers. A 3D reconstruction is produced at temporal intervals of 1/25 s, and a topological mask of the lower half of the face is fitted to the motion. Principal component analysis (PCA) of the mask shapes reduces the dimension of the mask shape space. The result is twofold: on the one hand, the face can be animated (in our case, made to speak new sentences); on the other hand, face dynamics can be tracked in 3D without markers, for performance capture.
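The mode-extraction step can be sketched in plain Python: mean-center the flattened mask shapes and pull out the dominant principal mode by power iteration. This stands in for a library SVD/PCA and is only an illustration of the dimensionality-reduction idea; the function name and data layout are assumptions.

```python
def dominant_mode(shapes, iters=200):
    """First principal mode of a set of shape vectors, by power iteration.

    shapes: list of equal-length flattened mask shapes (x1, y1, x2, y2, ...).
    Returns (mean, unit_mode). Projecting a centered shape onto the mode
    gives its coordinate in the reduced space; a full PCA would extract
    further modes by deflating the data and repeating.
    """
    n, d = len(shapes), len(shapes[0])
    mean = [sum(s[i] for s in shapes) / n for i in range(d)]
    centered = [[s[i] - mean[i] for i in range(d)] for s in shapes]
    v = [1.0] * d
    for _ in range(iters):
        # apply the covariance matrix without forming it: C v = X^T (X v)
        proj = [sum(row[i] * v[i] for i in range(d)) for row in centered]
        v = [sum(p * row[i] for p, row in zip(proj, centered)) for i in range(d)]
        norm = sum(x * x for x in v) ** 0.5 or 1.0
        v = [x / norm for x in v]
    return mean, v
```

New mouth shapes are then expressed as the mean plus a weighted sum of a few such modes, which is what makes both synthesis (speaking new sentences) and markerless tracking tractable.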