
Proceedings Computer Animation 2001. Fourteenth Conference on Computer Animation (Cat. No.01TH8596): Latest Publications

Visually believable explosions in real time
Claude Martins, J. W. Buchanan, J. Amanatides
The paper presents a real-time physically based simulation of object damage and motion due to a blast wave impact. An improved connected voxel model is used to represent the objects. The paper also explores auxiliary visual effects caused by the blast wave that increase visual believability without being rigorously physically based or computationally expensive.
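The abstract gives only the outline of the method, so here is a minimal sketch of the connected-voxel idea as described, not the authors' implementation: each voxel receives an impulse that falls off with distance from the blast centre, and a connection between two voxels is broken when their differential impulse exceeds a material threshold. The inverse-square falloff, the threshold value, and the data layout are assumptions made for illustration.

```python
# Hypothetical connected-voxel blast-damage sketch (not the paper's code).
# Assumptions: inverse-square impulse falloff, a fixed break threshold,
# and voxels stored as an (N, 3) position array with an explicit link set.
import numpy as np

def apply_blast(positions, links, blast_center, strength, break_threshold=5.0):
    """Return per-voxel impulse vectors and the set of links that survive.

    positions: (N, 3) array of voxel centres
    links:     set of (i, j) index pairs connecting neighbouring voxels
    """
    offsets = positions - blast_center              # vectors from blast to voxels
    dists = np.linalg.norm(offsets, axis=1) + 1e-6  # avoid division by zero
    # Impulse magnitude decays with the square of the distance (assumed law).
    magnitudes = strength / dists**2
    impulses = offsets / dists[:, None] * magnitudes[:, None]

    surviving = set()
    for i, j in links:
        # Break a connection when the two voxels are pushed apart harder
        # than the assumed material threshold.
        if np.linalg.norm(impulses[i] - impulses[j]) < break_threshold:
            surviving.add((i, j))
    return impulses, surviving

# Example: a small 2x2x2 block of voxels near a blast at the origin.
positions = np.array([[x, y, z] for x in (1, 2) for y in (0, 1) for z in (0, 1)], float)
links = {(0, 1), (0, 2), (1, 3), (2, 3), (4, 5), (4, 6), (5, 7), (6, 7),
         (0, 4), (1, 5), (2, 6), (3, 7)}
impulses, remaining = apply_blast(positions, links, np.array([0.0, 0.5, 0.5]), strength=20.0)
print(len(links) - len(remaining), "connections broken")
```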
DOI: https://doi.org/10.1109/CA.2001.982398
Citations: 7
Interactive modeling of the human musculature
Amaury Aubel, D. Thalmann
In this paper, we extend our previous work (Proc. Computer Animation and Simulation, pp. 125-135, Aug. 2000) and propose a muscle model that is suitable for computer graphics based on physiological and anatomical considerations. Muscle motion and deformation are automatically derived from one or several action lines, each action line being deformed by a 1D mass-spring system. The resulting model is fast, can accommodate most superficial human muscles, and could easily be integrated into current modeling packages. Example animations can be found at .
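To make the action-line idea concrete, the following is a rough sketch, not the authors' code: a single action line is modelled as a 1D chain of unit masses connected by springs, with its endpoints pinned to hypothetical bone attachment points. The spring constant, damping, time step, and explicit Euler integration are assumed choices.

```python
# Minimal 1D mass-spring action line (illustrative sketch, assumed parameters).
import numpy as np

def step_action_line(points, velocities, attachments, k=80.0, damping=2.0,
                     rest_len=None, dt=0.01):
    """Advance the action line one explicit-Euler step.

    points:      (N, 3) current positions of the masses along the line
    velocities:  (N, 3) current velocities
    attachments: dict {index: fixed position} for origin/insertion points
    """
    n = len(points)
    if rest_len is None:
        rest_len = np.linalg.norm(points[1] - points[0])  # assume uniform spacing
    forces = -damping * velocities
    for i in range(n - 1):
        d = points[i + 1] - points[i]
        length = np.linalg.norm(d)
        f = k * (length - rest_len) * d / length  # Hooke's law along the segment
        forces[i] += f
        forces[i + 1] -= f
    velocities = velocities + forces * dt         # unit masses assumed
    points = points + velocities * dt
    for idx, pos in attachments.items():          # pin attachment points to the bones
        points[idx] = pos
        velocities[idx] = 0.0
    return points, velocities

# Example: a straight action line whose insertion point has been displaced.
pts = np.linspace([0, 0, 0], [10, 0, 0], 6).astype(float)
vel = np.zeros_like(pts)
pts, vel = step_action_line(pts, vel, {0: np.array([0.0, 0.0, 0.0]),
                                       5: np.array([9.0, 1.0, 0.0])})
```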
DOI: https://doi.org/10.1109/CA.2001.982390
Citations: 75
Analysis and synthesis of facial expressions with hand-generated muscle actuation basis
Byoungwon Choe, Hyeongseok Ko
We present a performance-driven facial animation system for analyzing captured expressions to find muscle actuation values and synthesizing expressions from those values. A significant departure in our approach is that we let artists sculpt the initial draft of the actuation basis, the basic facial shapes corresponding to the isolated actuation of individual muscles, instead of computing skin-surface deformation entirely from mathematical models such as finite element methods. We synthesize expressions by linear combinations of the basis elements, and analyze expressions by finding the weights for the combinations. Even though the hand-generated actuation basis represents the essence of the subject's characteristic expressions, it is not accurate enough to be used in the subsequent computational procedures, so we also describe an iterative algorithm to increase its accuracy. The experimental results suggest that our artist-in-the-loop method produces a more predictable and controllable outcome than pure mathematical models, and thus can be a quite useful tool in animation production.
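The synthesis/analysis split maps onto simple linear algebra. Below is an illustrative sketch, not the authors' system: synthesis is a weighted sum of basis displacements added to a neutral face, and analysis recovers the weights of a captured expression by least squares, clamped to be non-negative. The clamp, the toy basis, and the data shapes are assumptions.

```python
# Illustrative sketch of expression synthesis and analysis with an actuation basis.
import numpy as np

def synthesize(neutral, basis, weights):
    """Expression = neutral face + weighted sum of basis displacements."""
    return neutral + np.tensordot(weights, basis, axes=1)

def analyze(neutral, basis, captured):
    """Recover actuation weights for a captured expression by least squares.

    basis:    (M, V, 3) displacement of V vertices for each of M muscle actuations
    captured: (V, 3) captured expression geometry
    """
    A = basis.reshape(len(basis), -1).T          # (3V, M) design matrix
    b = (captured - neutral).ravel()
    weights, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.clip(weights, 0.0, None)           # muscles only pull: assumed constraint

# Toy example with 4 vertices and 2 hand-sculpted basis shapes.
neutral = np.zeros((4, 3))
basis = np.random.default_rng(0).normal(size=(2, 4, 3))
captured = synthesize(neutral, basis, np.array([0.7, 0.2]))
print(analyze(neutral, basis, captured))         # ~[0.7, 0.2]
```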
DOI: https://doi.org/10.1109/CA.2001.982372
Citations: 58
Merging deformable and rigid body mechanics simulation
J. Jansson, J. Vergeest, G. Kuczogi, I. Horváth
Presents an interface between a deformable body mechanics model and a rigid body mechanics model. What is novel with our approach is that the physical representation in both the models is the same, which ensures behavioral correctness and allows great flexibility. We use a mass-spring representation extended with the concept of volume, and thus contact and collision. All physical interaction occurs between the mass elements only, and thus there is no need for explicit handling of rigid-deformable or rigid-rigid body interaction. This also means that bodies can be partially rigid and partially deformable. It is also possible to change whether part of a body should be rigid or not dynamically. We present a demonstration example, and also possible applications in conceptual design engineering, geometric modeling, as well as computer animation.
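One way to read the unified representation is that every body is a set of mass elements, and "rigid" simply means a group of masses whose members move together. The sketch below is a deliberately simplified illustration (rigid groups translate only, with no rotational dynamics and no volume or collision term), not the paper's method.

```python
# Illustrative unified mass-element sketch: rigid groups move as one, the rest deform.
# Simplification assumed here: rigid groups translate only (no rotational dynamics).
import numpy as np

def step(positions, velocities, springs, rigid_groups, k=50.0, dt=0.01):
    """positions/velocities: (N, 3); springs: list of (i, j, rest_length);
    rigid_groups: list of index lists whose members keep moving together."""
    forces = np.zeros_like(positions)
    for i, j, rest in springs:
        d = positions[j] - positions[i]
        length = np.linalg.norm(d)
        f = k * (length - rest) * d / length      # spring force between mass elements
        forces[i] += f
        forces[j] -= f
    velocities = velocities + forces * dt         # unit masses assumed
    for group in rigid_groups:
        # Average the group's velocity so its members translate as one body.
        velocities[group] = velocities[group].mean(axis=0)
    positions = positions + velocities * dt
    return positions, velocities

# Example: masses 0-1 form a rigid pair, mass 2 is deformable and attached by a spring.
pos = np.array([[0.0, 0, 0], [1.0, 0, 0], [3.0, 0, 0]])
vel = np.zeros_like(pos)
pos, vel = step(pos, vel, springs=[(1, 2, 1.5)], rigid_groups=[[0, 1]])
```

Under this reading, switching part of a body between rigid and deformable at run time amounts to adding or removing its indices from a rigid group.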
DOI: https://doi.org/10.1109/CA.2001.982388
Citations: 1
AI-based animation for interactive storytelling
M. Cavazza, Fred Charles, Steven J. Mead
In this paper, we describe a method for implementing AI-based animation of artificial actors in the context of interactive storytelling. We have developed a fully implemented prototype based on the Unreal(TM) game engine and carried out experiments with a simple sitcom-like scenario. We discuss the central role of artificial actors in interactive storytelling and how real-time generation of their behaviour participates in the creation of a dynamic storyline. We follow previous work describing the behaviour of artificial actors through AI planning formalisms, and adapt it to the context of narrative representation. The set of all possible behaviours, accounting for various instantiations of a basic plot, can be represented through an AND/OR graph. A real-time variant of the AO* algorithm can be used to interleave planning and action, thus allowing characters to interact among themselves and with the user. Finally, we present several examples of short plots and situations generated by the system from the dynamic interaction of artificial actors.
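To make the AND/OR representation concrete, here is a minimal sketch of plan extraction from such a graph. It is hypothetical and far simpler than a real-time AO* variant, which would use cost estimates and interleave search with execution: an AND node succeeds when all of its children can be realised, an OR node when any one of them can. The node encoding and the toy plot are assumptions.

```python
# Minimal AND/OR plot-graph sketch (illustrative; a real-time AO* variant would
# interleave this search with execution and guide it with cost estimates).

def solve(node, feasible):
    """Return a list of primitive actions realising `node`, or None if impossible.

    node: ('AND', [children]) | ('OR', [children]) | ('ACT', name)
    feasible: predicate deciding whether a primitive action is currently possible.
    """
    kind, payload = node
    if kind == 'ACT':
        return [payload] if feasible(payload) else None
    if kind == 'AND':
        plan = []
        for child in payload:
            sub = solve(child, feasible)
            if sub is None:        # one unrealisable child defeats an AND node
                return None
            plan += sub
        return plan
    if kind == 'OR':
        for child in payload:
            sub = solve(child, feasible)
            if sub is not None:    # the first realisable alternative is chosen
                return sub
        return None

# Toy plot: the character acquires information either by asking a friend or by
# reading a diary, then acts on it.
plot = ('AND', [('OR', [('ACT', 'ask_friend'), ('ACT', 'read_diary')]),
                ('ACT', 'confront')])
print(solve(plot, feasible=lambda a: a != 'ask_friend'))   # ['read_diary', 'confront']
```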
DOI: https://doi.org/10.1109/CA.2001.982384
Citations: 30
Face animation based on observed 3D speech dynamics
Gregor A. Kalberer, L. Gool
Realistic face animation is especially hard as we are all experts in the perception and interpretation of face dynamics. One approach is to simulate facial anatomy. Alternatively, animation can be based on first observing the visible 3D dynamics, extracting the basic modes, and then putting these together according to the required performance. This is the strategy followed in this paper, which focuses on speech. The approach follows a kind of bootstrap procedure. First, 3D shape statistics are learned from a talking face with a relatively small number of markers. A 3D reconstruction is produced at temporal intervals of 1/25 s. A topological mask of the lower half of the face is fitted to the motion. Principal component analysis (PCA) of the mask shapes reduces the dimension of the mask shape space. The result is two-fold. On the one hand, the face can be animated (in our case, it can be made to speak new sentences). On the other hand, face dynamics can be tracked in 3D without markers for performance capture.
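The dimension-reduction step described above is standard PCA over the stacked mask-vertex coordinates of the reconstructed frames. The sketch below, with made-up data shapes and mode count, builds the reduced shape space with an SVD and projects a mask shape onto the leading modes; it is an illustration, not the authors' pipeline.

```python
# Illustrative PCA over mask shapes: frames are (V, 3) vertex arrays flattened to rows.
import numpy as np

def build_shape_space(frames, n_modes=8):
    """frames: (F, V, 3) reconstructed mask shapes sampled at 1/25 s intervals."""
    X = frames.reshape(len(frames), -1)              # one flattened shape per row
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_modes]                        # mean shape and leading modes

def project(shape, mean, modes):
    """Coordinates of one mask shape in the reduced shape space."""
    return modes @ (shape.ravel() - mean)

def reconstruct(coords, mean, modes, n_vertices):
    """Rebuild a mask shape from its reduced-space coordinates."""
    return (mean + coords @ modes).reshape(n_vertices, 3)

# Toy example: 100 frames of a 50-vertex mask.
rng = np.random.default_rng(1)
frames = rng.normal(size=(100, 50, 3))
mean, modes = build_shape_space(frames)
coords = project(frames[0], mean, modes)
print(reconstruct(coords, mean, modes, 50).shape)    # (50, 3)
```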
DOI: https://doi.org/10.1109/CA.2001.982373
Citations: 49
Journal
Proceedings Computer Animation 2001. Fourteenth Conference on Computer Animation (Cat. No.01TH8596)