
Proceedings of the 29th annual conference on Computer graphics and interactive techniques: Latest Publications

Modeling and rendering of realistic feathers
Yanyun Chen, Ying-Qing Xu, B. Guo, H. Shum
We present techniques for realistic modeling and rendering of feathers and birds. Our approach is motivated by the observation that a feather is a branching structure that can be described by an L-system. The parametric L-system we derived allows the user to easily create feathers of different types and shapes by changing a few parameters. The randomness in feather geometry is also incorporated into this L-system. To render a feather realistically, we have derived an efficient form of the bidirectional texture function (BTF), which describes the small but visible geometry details on the feather blade. A rendering algorithm combining the L-system and the BTF displays feathers photorealistically while capitalizing on graphics hardware for efficiency. Based on this framework of feather modeling and rendering, we developed a system that can automatically generate appropriate feathers to cover different parts of a bird's body from a few "key feathers" supplied by the user, and produce realistic renderings of the bird.
{"title":"Modeling and rendering of realistic feathers","authors":"Yanyun Chen, Ying-Qing Xu, B. Guo, H. Shum","doi":"10.1145/566570.566628","DOIUrl":"https://doi.org/10.1145/566570.566628","url":null,"abstract":"We present techniques for realistic modeling and rendering of feathers and birds. Our approach is motivated by the observation that a feather is a branching structure that can be described by an L-system. The parametric L-system we derived allows the user to easily create feathers of different types and shapes by changing a few parameters. The randomness in feather geometry is also incorporated into this L-system. To render a feather realistically, we have derived an efficient form of the bidirectional texture function (BTF), which describes the small but visible geometry details on the feather blade. A rendering algorithm combining the L-system and the BTF displays feathers photorealistically while capitalizing on graphics hardware for efficiency. Based on this framework of feather modeling and rendering, we developed a system that can automatically generate appropriate feathers to cover different parts of a bird's body from a few \"key feathers\" supplied by the user, and produce realistic renderings of the bird.","PeriodicalId":197746,"journal":{"name":"Proceedings of the 29th annual conference on Computer graphics and interactive techniques","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115290896","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 40
CHARMS: a simple framework for adaptive simulation
E. Grinspun, P. Krysl, P. Schröder
Finite element solvers are a basic component of simulation applications; they are common in computer graphics, engineering, and medical simulations. Although adaptive solvers can be of great value in reducing the often high computational cost of simulations, they are not broadly employed. Indeed, building adaptive solvers can be a daunting task, especially for 3D finite elements. In this paper we introduce a new approach to producing conforming, hierarchical, adaptive refinement methods (CHARMS). The basic principle of our approach is to refine basis functions, not elements. This removes a number of implementation headaches associated with other approaches and is a general technique independent of domain dimension (here 2D and 3D), element type (e.g., triangle, quad, tetrahedron, hexahedron), and basis function order (piecewise linear, higher-order B-splines, Loop subdivision, etc.). The (un-)refinement algorithms are simple and require little in terms of data structure support. We demonstrate the versatility of our new approach through 2D and 3D examples, including medical applications and thin-shell animations.
{"title":"CHARMS: a simple framework for adaptive simulation","authors":"E. Grinspun, P. Krysl, P. Schröder","doi":"10.1145/566570.566578","DOIUrl":"https://doi.org/10.1145/566570.566578","url":null,"abstract":"Finite element solvers are a basic component of simulation applications; they are common in computer graphics, engineering, and medical simulations. Although adaptive solvers can be of great value in reducing the often high computational cost of simulations they are not employed broadly. Indeed, building adaptive solvers can be a daunting task especially for 3D finite elements. In this paper we are introducing a new approach to produce conforming, hierarchical, adaptive refinement methods (CHARMS). The basic principle of our approach is to refine basis functions, not elements. This removes a number of implementation headaches associated with other approaches and is a general technique independent of domain dimension (here 2D and 3D), element type (e.g., triangle, quad, tetrahedron, hexahedron), and basis function order (piece-wise linear, higher order B-splines, Loop subdivision, etc.). The (un-)refinement algorithms are simple and require little in terms of data structure support. We demonstrate the versatility of our new approach through 2D and 3D examples, including medical applications and thin-shell animations.","PeriodicalId":197746,"journal":{"name":"Proceedings of the 29th annual conference on Computer graphics and interactive techniques","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128036838","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 383
Synthesis of bidirectional texture functions on arbitrary surfaces
Xin Tong, Jingdan Zhang, Ligang Liu, Xi Wang, B. Guo, H. Shum
The bidirectional texture function (BTF) is a 6D function that can describe textures arising from both spatially-variant surface reflectance and surface mesostructures. In this paper, we present an algorithm for synthesizing the BTF on an arbitrary surface from a sample BTF. A main challenge in surface BTF synthesis is the requirement of a consistent mesostructure on the surface, and to achieve that we must handle the large amount of data in a BTF sample. Our algorithm performs BTF synthesis based on surface textons, which extract essential information from the sample BTF to facilitate the synthesis. We also describe a general search strategy, called the k-coherent search, for fast BTF synthesis using surface textons. A BTF synthesized using our algorithm not only looks similar to the BTF sample in all viewing/lighting conditions but also exhibits a consistent mesostructure when viewing and lighting directions change. Moreover, the synthesized BTF fits the target surface naturally and seamlessly. We demonstrate the effectiveness of our algorithm with sample BTFs from various sources, including those measured from real-world textures.
{"title":"Synthesis of bidirectional texture functions on arbitrary surfaces","authors":"Xin Tong, Jingdan Zhang, Ligang Liu, Xi Wang, B. Guo, H. Shum","doi":"10.1145/566570.566634","DOIUrl":"https://doi.org/10.1145/566570.566634","url":null,"abstract":"The bidirectional texture function (BTF) is a 6D function that can describe textures arising from both spatially-variant surface reflectance and surface mesostructures. In this paper, we present an algorithm for synthesizing the BTF on an arbitrary surface from a sample BTF. A main challenge in surface BTF synthesis is the requirement of a consistent mesostructure on the surface, and to achieve that we must handle the large amount of data in a BTF sample. Our algorithm performs BTF synthesis based on surface textons, which extract essential information from the sample BTF to facilitate the synthesis. We also describe a general search strategy, called the k-coherent search, for fast BTF synthesis using surface textons. A BTF synthesized using our algorithm not only looks similar to the BTF sample in all viewing/lighthing conditions but also exhibits a consistent mesostructure when viewing and lighting directions change. Moreover, the synthesized BTF fits the target surface naturally and seamlessly. We demonstrate the effectiveness of our algorithm with sample BTFs from various sources, including those measured from real-world textures.","PeriodicalId":197746,"journal":{"name":"Proceedings of the 29th annual conference on Computer graphics and interactive techniques","volume":"115 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126929802","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 301
Interactive global illumination in dynamic scenes
P. Tole, F. Pellacini, B. Walter, D. Greenberg
In this paper, we present a system for interactive computation of global illumination in dynamic scenes. Our system uses a novel scheme for caching the results of a high quality pixel-based renderer such as a bidirectional path tracer. The Shading Cache is an object-space hierarchical subdivision mesh with lazily computed shading values at its vertices. A high frame rate display is generated from the Shading Cache using hardware-based interpolation and texture mapping. An image space sampling scheme refines the Shading Cache in regions that have the most interpolation error or those that are most likely to be affected by object or camera motion. Our system handles dynamic scenes and moving light sources efficiently, providing useful feedback within a few seconds and high quality images within a few tens of seconds, without the need for any pre-computation. Our approach allows us to significantly outperform other interactive systems based on caching ray-tracing samples, especially in dynamic scenes. Based on our results, we believe that the Shading Cache will be an invaluable tool in lighting design and modelling while rendering.
{"title":"Interactive global illumination in dynamic scenes","authors":"P. Tole, F. Pellacini, B. Walter, D. Greenberg","doi":"10.1145/566570.566613","DOIUrl":"https://doi.org/10.1145/566570.566613","url":null,"abstract":"In this paper, we present a system for interactive computation of global illumination in dynamic scenes. Our system uses a novel scheme for caching the results of a high quality pixel-based renderer such as a bidirectional path tracer. The Shading Cache is an object-space hierarchical subdivision mesh with lazily computed shading values at its vertices. A high frame rate display is generated from the Shading Cache using hardware-based interpolation and texture mapping. An image space sampling scheme refines the Shading Cache in regions that have the most interpolation error or those that are most likely to be affected by object or camera motion.Our system handles dynamic scenes and moving light sources efficiently, providing useful feedback within a few seconds and high quality images within a few tens of seconds, without the need for any pre-computation. Our approach allows us to significantly outperform other interactive systems based on caching ray-tracing samples, especially in dynamic scenes. Based on our results, we believe that the Shading Cache will be an invaluable tool in lighting design and modelling while rendering.","PeriodicalId":197746,"journal":{"name":"Proceedings of the 29th annual conference on Computer graphics and interactive techniques","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131486497","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 120
Motion texture: a two-level statistical model for character motion synthesis
Yan Li, Tianshu Wang, H. Shum
In this paper, we describe a novel technique, called motion texture, for synthesizing complex human-figure motion (e.g., dancing) that is statistically similar to the original motion captured data. We define motion texture as a set of motion textons and their distribution, which characterize the stochastic and dynamic nature of the captured motion. Specifically, a motion texton is modeled by a linear dynamic system (LDS) while the texton distribution is represented by a transition matrix indicating how likely each texton is to switch to another. We have designed a maximum likelihood algorithm to learn the motion textons and their relationship from the captured dance motion. The learnt motion texture can then be used to generate new animations automatically and/or edit animation sequences interactively. Most interestingly, motion texture can be manipulated at different levels, either by changing the fine details of a specific motion at the texton level or by designing a new choreography at the distribution level. Our approach is demonstrated by many synthesized sequences of visually compelling dance motion.
{"title":"Motion texture: a two-level statistical model for character motion synthesis","authors":"Yan Li, Tianshu Wang, H. Shum","doi":"10.1145/566570.566604","DOIUrl":"https://doi.org/10.1145/566570.566604","url":null,"abstract":"In this paper, we describe a novel technique, called motion texture, for synthesizing complex human-figure motion (e.g., dancing) that is statistically similar to the original motion captured data. We define motion texture as a set of motion textons and their distribution, which characterize the stochastic and dynamic nature of the captured motion. Specifically, a motion texton is modeled by a linear dynamic system (LDS) while the texton distribution is represented by a transition matrix indicating how likely each texton is switched to another. We have designed a maximum likelihood algorithm to learn the motion textons and their relationship from the captured dance motion. The learnt motion texture can then be used to generate new animations automatically and/or edit animation sequences interactively. Most interestingly, motion texture can be manipulated at different levels, either by changing the fine details of a specific motion at the texton level or by designing a new choreography at the distribution level. Our approach is demonstrated by many synthesized sequences of visually compelling dance motion.","PeriodicalId":197746,"journal":{"name":"Proceedings of the 29th annual conference on Computer graphics and interactive techniques","volume":"90 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126442633","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 505
Painting and rendering textures on unparameterized models
D. DeBry, Jonathan Gibbs, Devorah DeLeon Petty, Nate Robins
This paper presents a solution for texture mapping unparameterized models. The quality of a texture on a model is often limited by the model's parameterization into a 2D texture space. For models with complex topologies or complex distributions of structural detail, finding this parameterization can be very difficult and usually must be performed manually through a slow iterative process between the modeler and texture painter. This is especially true of models which carry no natural parameterizations, such as subdivision surfaces or models acquired from 3D scanners. Instead, we remove the 2D parameterization and store the texture in 3D space as a sparse, adaptive octree. Because no parameterization is necessary, textures can be painted on any surface that can be rendered. No mappings between disparate topologies are used, so texture artifacts such as seams and stretching do not exist. Because this method is adaptive, detail is created in the map only where required by the texture painter, conserving memory usage.
{"title":"Painting and rendering textures on unparameterized models","authors":"D. DeBry, Jonathan Gibbs, Devorah DeLeon Petty, Nate Robins","doi":"10.1145/566570.566649","DOIUrl":"https://doi.org/10.1145/566570.566649","url":null,"abstract":"This paper presents a solution for texture mapping unparameterized models. The quality of a texture on a model is often limited by the model's parameterization into a 2D texture space. For models with complex topologies or complex distributions of structural detail, finding this parameterization can be very difficult and usually must be performed manually through a slow iterative process between the modeler and texture painter. This is especially true of models which carry no natural parameterizations, such as subdivision surfaces or models acquired from 3D scanners. Instead, we remove the 2D parameterization and store the texture in 3D space as a sparse, adaptive octree. Because no parameterization is necessary, textures can be painted on any surface that can be rendered. No mappings between disparate topologies are used, so texture artifacts such as seams and stretching do not exist. Because this method is adaptive, detail is created in the map only where required by the texture painter, conserving memory usage.","PeriodicalId":197746,"journal":{"name":"Proceedings of the 29th annual conference on Computer graphics and interactive techniques","volume":"112 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131735410","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 93
Integrated learning for interactive synthetic characters
B. Blumberg, Marc Downie, Y. Ivanov, Matt Berlin, M. P. Johnson, Bill Tomlinson
The ability to learn is a potentially compelling and important quality for interactive synthetic characters. To that end, we describe a practical approach to real-time learning for synthetic characters. Our implementation is grounded in the techniques of reinforcement learning and informed by insights from animal training. It simplifies the learning task for characters by (a) enabling them to take advantage of predictable regularities in their world, (b) allowing them to make maximal use of any supervisory signals, and (c) making them easy to train by humans. We built an autonomous animated dog that can be trained with a technique used to train real dogs called "clicker training". Capabilities demonstrated include being trained to recognize and use acoustic patterns as cues for actions, as well as to synthesize new actions from novel paths through its motion space. A key contribution of this paper is to demonstrate that by addressing the three problems of state, action, and state-action space discovery at the same time, the solution for each becomes easier. Finally, we articulate heuristics and design principles that make learning practical for synthetic characters.
{"title":"Integrated learning for interactive synthetic characters","authors":"B. Blumberg, Marc Downie, Y. Ivanov, Matt Berlin, M. P. Johnson, Bill Tomlinson","doi":"10.1145/566570.566597","DOIUrl":"https://doi.org/10.1145/566570.566597","url":null,"abstract":"The ability to learn is a potentially compelling and important quality for interactive synthetic characters. To that end, we describe a practical approach to real-time learning for synthetic characters. Our implementation is grounded in the techniques of reinforcement learning and informed by insights from animal training. It simplifies the learning task for characters by (a) enabling them to take advantage of predictable regularities in their world, (b) allowing them to make maximal use of any supervisory signals, and (c) making them easy to train by humans.We built an autonomous animated dog that can be trained with a technique used to train real dogs called \"clicker training\". Capabilities demonstrated include being trained to recognize and use acoustic patterns as cues for actions, as well as to synthesize new actions from novel paths through its motion space.A key contribution of this paper is to demonstrate that by addressing the three problems of state, action, and state-action space discovery at the same time, the solution for each becomes easier. Finally, we articulate heuristics and design principles that make learning practical for synthetic characters.","PeriodicalId":197746,"journal":{"name":"Proceedings of the 29th annual conference on Computer graphics and interactive techniques","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133339302","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 235
Interactive control of avatars animated with human motion data
Jehee Lee, Jinxiang Chai, Paul S. A. Reitsma, J. Hodgins, N. Pollard
Real-time control of three-dimensional avatars is an important problem in the context of computer games and virtual environments. Avatar animation and control is difficult, however, because a large repertoire of avatar behaviors must be made available, and the user must be able to select from this set of behaviors, possibly with a low-dimensional input device. One appealing approach to obtaining a rich set of avatar behaviors is to collect an extended, unlabeled sequence of motion data appropriate to the application. In this paper, we show that such a motion database can be preprocessed for flexibility in behavior and efficient search and exploited for real-time avatar control. Flexibility is created by identifying plausible transitions between motion segments, and efficient search through the resulting graph structure is obtained through clustering. Three interface techniques are demonstrated for controlling avatar motion using this data structure: the user selects from a set of available choices, sketches a path through an environment, or acts out a desired motion in front of a video camera. We demonstrate the flexibility of the approach through four different applications and compare the avatar motion to directly recorded human motion.
{"title":"Interactive control of avatars animated with human motion data","authors":"Jehee Lee, Jinxiang Chai, Paul S. A. Reitsma, J. Hodgins, N. Pollard","doi":"10.1145/566570.566607","DOIUrl":"https://doi.org/10.1145/566570.566607","url":null,"abstract":"Real-time control of three-dimensional avatars is an important problem in the context of computer games and virtual environments. Avatar animation and control is difficult, however, because a large repertoire of avatar behaviors must be made available, and the user must be able to select from this set of behaviors, possibly with a low-dimensional input device. One appealing approach to obtaining a rich set of avatar behaviors is to collect an extended, unlabeled sequence of motion data appropriate to the application. In this paper, we show that such a motion database can be preprocessed for flexibility in behavior and efficient search and exploited for real-time avatar control. Flexibility is created by identifying plausible transitions between motion segments, and efficient search through the resulting graph structure is obtained through clustering. Three interface techniques are demonstrated for controlling avatar motion using this data structure: the user selects from a set of available choices, sketches a path through an environment, or acts out a desired motion in front of a video camera. We demonstrate the flexibility of the approach through four different applications and compare the avatar motion to directly recorded human motion.","PeriodicalId":197746,"journal":{"name":"Proceedings of the 29th annual conference on Computer graphics and interactive techniques","volume":"180 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116266860","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1066
Shader-driven compilation of rendering assets
P. Lalonde, Eric Schenk
Rendering performance of consumer graphics hardware benefits from pre-processing geometric data into a form targeted to the underlying API and hardware. The various elements of geometric data are then coupled with a shading program at runtime to draw the asset. In this paper we describe a system in which pre-processing is done in a compilation process in which the geometric data are processed with knowledge of their shading programs. The data are converted into structures targeted directly to the hardware, and a code stream is assembled that describes the manipulations required to render these data structures. Our compiler is structured like a traditional code compiler, with a front end that reads the geometric data and attributes (hereafter referred to as an art asset) output from a 3D modeling package and shaders in a platform-independent form and performs platform-independent optimizations, and a back end that performs platform-specific optimizations and generates platform-targeted data structures and code streams. Our compiler back-end has been targeted to four platforms, three of which are radically different from one another. On all platforms the rendering performance of our compiled assets, used in real situations, is well above that of hand-coded assets.
{"title":"Shader-driven compilation of rendering assets","authors":"P. Lalonde, Eric Schenk","doi":"10.1145/566570.566641","DOIUrl":"https://doi.org/10.1145/566570.566641","url":null,"abstract":"Rendering performance of consumer graphics hardware benefits from pre-processing geometric data into a form targeted to the underlying API and hardware. The various elements of geometric data are then coupled with a shading program at runtime to draw the asset.In this paper we describe a system in which pre-processing is done in a compilation process in which the geometric data are processed with knowledge of their shading programs. The data are converted into structures targeted directly to the hardware, and a code stream is assembled that describes the manipulations required to render these data structures. Our compiler is structured like a traditional code compiler, with a front end that reads the geometric data and attributes (hereafter referred to as an art asset) output from a 3D modeling package and shaders in a platform independent form and performs platform-independent optimizations, and a back end that performs platform-specific optimizations and generates platform-targeted data structures and code streams.Our compiler back-end has been targeted to four platforms, three of which are radically different from one another. On all platforms the rendering performance of our compiled assets, used in real situations, is well above that of hand-coded assets.","PeriodicalId":197746,"journal":{"name":"Proceedings of the 29th annual conference on Computer graphics and interactive techniques","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131824606","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 20
WYSIWYG NPR: drawing strokes directly on 3D models
Robert D. Kalnins, L. Markosian, Barbara J. Meier, Michael A. Kowalski, Joseph C. Lee, Phillip L. Davidson, Matthew Webb, J. Hughes, Adam Finkelstein
We present a system that lets a designer directly annotate a 3D model with strokes, imparting a personal aesthetic to the non-photorealistic rendering of the object. The artist chooses a "brush" style, then draws strokes over the model from one or more viewpoints. When the system renders the scene from any new viewpoint, it adapts the number and placement of the strokes appropriately to maintain the original look.
{"title":"WYSIWYG NPR: drawing strokes directly on 3D models","authors":"Robert D. Kalnins, L. Markosian, Barbara J. Meier, Michael A. Kowalski, Joseph C. Lee, Phillip L. Davidson, Matthew Webb, J. Hughes, Adam Finkelstein","doi":"10.1145/566570.566648","DOIUrl":"https://doi.org/10.1145/566570.566648","url":null,"abstract":"We present a system that lets a designer directly annotate a 3D model with strokes, imparting a personal aesthetic to the non-photorealistic rendering of the object. The artist chooses a \"brush\" style, then draws strokes over the model from one or more viewpoints. When the system renders the scene from any new viewpoint, it adapts the number and placement of the strokes appropriately to maintain the original look.","PeriodicalId":197746,"journal":{"name":"Proceedings of the 29th annual conference on Computer graphics and interactive techniques","volume":"127 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127080390","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 300