We present techniques for realistic modeling and rendering of feathers and birds. Our approach is motivated by the observation that a feather is a branching structure that can be described by an L-system. The parametric L-system we derived allows the user to easily create feathers of different types and shapes by changing a few parameters. The randomness in feather geometry is also incorporated into this L-system. To render a feather realistically, we have derived an efficient form of the bidirectional texture function (BTF), which describes the small but visible geometry details on the feather blade. A rendering algorithm combining the L-system and the BTF displays feathers photorealistically while capitalizing on graphics hardware for efficiency. Based on this framework of feather modeling and rendering, we developed a system that can automatically generate appropriate feathers to cover different parts of a bird's body from a few "key feathers" supplied by the user, and produce realistic renderings of the bird.
{"title":"Modeling and rendering of realistic feathers","authors":"Yanyun Chen, Ying-Qing Xu, B. Guo, H. Shum","doi":"10.1145/566570.566628","DOIUrl":"https://doi.org/10.1145/566570.566628","url":null,"abstract":"We present techniques for realistic modeling and rendering of feathers and birds. Our approach is motivated by the observation that a feather is a branching structure that can be described by an L-system. The parametric L-system we derived allows the user to easily create feathers of different types and shapes by changing a few parameters. The randomness in feather geometry is also incorporated into this L-system. To render a feather realistically, we have derived an efficient form of the bidirectional texture function (BTF), which describes the small but visible geometry details on the feather blade. A rendering algorithm combining the L-system and the BTF displays feathers photorealistically while capitalizing on graphics hardware for efficiency. Based on this framework of feather modeling and rendering, we developed a system that can automatically generate appropriate feathers to cover different parts of a bird's body from a few \"key feathers\" supplied by the user, and produce realistic renderings of the bird.","PeriodicalId":197746,"journal":{"name":"Proceedings of the 29th annual conference on Computer graphics and interactive techniques","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115290896","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Finite element solvers are a basic component of simulation applications; they are common in computer graphics, engineering, and medical simulations. Although adaptive solvers can be of great value in reducing the often high computational cost of simulations, they are not employed broadly. Indeed, building adaptive solvers can be a daunting task, especially for 3D finite elements. In this paper we introduce a new approach to producing conforming, hierarchical, adaptive refinement methods (CHARMS). The basic principle of our approach is to refine basis functions, not elements. This removes a number of implementation headaches associated with other approaches and is a general technique independent of domain dimension (here 2D and 3D), element type (e.g., triangle, quad, tetrahedron, hexahedron), and basis function order (piecewise linear, higher-order B-splines, Loop subdivision, etc.). The (un-)refinement algorithms are simple and require little in terms of data structure support. We demonstrate the versatility of our new approach through 2D and 3D examples, including medical applications and thin-shell animations.
{"title":"CHARMS: a simple framework for adaptive simulation","authors":"E. Grinspun, P. Krysl, P. Schröder","doi":"10.1145/566570.566578","DOIUrl":"https://doi.org/10.1145/566570.566578","url":null,"abstract":"Finite element solvers are a basic component of simulation applications; they are common in computer graphics, engineering, and medical simulations. Although adaptive solvers can be of great value in reducing the often high computational cost of simulations they are not employed broadly. Indeed, building adaptive solvers can be a daunting task especially for 3D finite elements. In this paper we are introducing a new approach to produce conforming, hierarchical, adaptive refinement methods (CHARMS). The basic principle of our approach is to refine basis functions, not elements. This removes a number of implementation headaches associated with other approaches and is a general technique independent of domain dimension (here 2D and 3D), element type (e.g., triangle, quad, tetrahedron, hexahedron), and basis function order (piece-wise linear, higher order B-splines, Loop subdivision, etc.). The (un-)refinement algorithms are simple and require little in terms of data structure support. We demonstrate the versatility of our new approach through 2D and 3D examples, including medical applications and thin-shell animations.","PeriodicalId":197746,"journal":{"name":"Proceedings of the 29th annual conference on Computer graphics and interactive techniques","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128036838","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The bidirectional texture function (BTF) is a 6D function that can describe textures arising from both spatially-variant surface reflectance and surface mesostructures. In this paper, we present an algorithm for synthesizing the BTF on an arbitrary surface from a sample BTF. A main challenge in surface BTF synthesis is maintaining a consistent mesostructure on the surface, which requires handling the large amount of data in a BTF sample. Our algorithm performs BTF synthesis based on surface textons, which extract essential information from the sample BTF to facilitate the synthesis. We also describe a general search strategy, called the k-coherent search, for fast BTF synthesis using surface textons. A BTF synthesized using our algorithm not only looks similar to the BTF sample under all viewing/lighting conditions but also exhibits a consistent mesostructure when viewing and lighting directions change. Moreover, the synthesized BTF fits the target surface naturally and seamlessly. We demonstrate the effectiveness of our algorithm with sample BTFs from various sources, including those measured from real-world textures.
{"title":"Synthesis of bidirectional texture functions on arbitrary surfaces","authors":"Xin Tong, Jingdan Zhang, Ligang Liu, Xi Wang, B. Guo, H. Shum","doi":"10.1145/566570.566634","DOIUrl":"https://doi.org/10.1145/566570.566634","url":null,"abstract":"The bidirectional texture function (BTF) is a 6D function that can describe textures arising from both spatially-variant surface reflectance and surface mesostructures. In this paper, we present an algorithm for synthesizing the BTF on an arbitrary surface from a sample BTF. A main challenge in surface BTF synthesis is the requirement of a consistent mesostructure on the surface, and to achieve that we must handle the large amount of data in a BTF sample. Our algorithm performs BTF synthesis based on surface textons, which extract essential information from the sample BTF to facilitate the synthesis. We also describe a general search strategy, called the k-coherent search, for fast BTF synthesis using surface textons. A BTF synthesized using our algorithm not only looks similar to the BTF sample in all viewing/lighthing conditions but also exhibits a consistent mesostructure when viewing and lighting directions change. Moreover, the synthesized BTF fits the target surface naturally and seamlessly. We demonstrate the effectiveness of our algorithm with sample BTFs from various sources, including those measured from real-world textures.","PeriodicalId":197746,"journal":{"name":"Proceedings of the 29th annual conference on Computer graphics and interactive techniques","volume":"115 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126929802","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we present a system for interactive computation of global illumination in dynamic scenes. Our system uses a novel scheme for caching the results of a high-quality pixel-based renderer such as a bidirectional path tracer. The Shading Cache is an object-space hierarchical subdivision mesh with lazily computed shading values at its vertices. A high frame rate display is generated from the Shading Cache using hardware-based interpolation and texture mapping. An image-space sampling scheme refines the Shading Cache in regions that have the most interpolation error or those that are most likely to be affected by object or camera motion. Our system handles dynamic scenes and moving light sources efficiently, providing useful feedback within a few seconds and high-quality images within a few tens of seconds, without the need for any pre-computation. Our approach allows us to significantly outperform other interactive systems based on caching ray-tracing samples, especially in dynamic scenes. Based on our results, we believe that the Shading Cache will be an invaluable tool in lighting design and modelling while rendering.
{"title":"Interactive global illumination in dynamic scenes","authors":"P. Tole, F. Pellacini, B. Walter, D. Greenberg","doi":"10.1145/566570.566613","DOIUrl":"https://doi.org/10.1145/566570.566613","url":null,"abstract":"In this paper, we present a system for interactive computation of global illumination in dynamic scenes. Our system uses a novel scheme for caching the results of a high quality pixel-based renderer such as a bidirectional path tracer. The Shading Cache is an object-space hierarchical subdivision mesh with lazily computed shading values at its vertices. A high frame rate display is generated from the Shading Cache using hardware-based interpolation and texture mapping. An image space sampling scheme refines the Shading Cache in regions that have the most interpolation error or those that are most likely to be affected by object or camera motion.Our system handles dynamic scenes and moving light sources efficiently, providing useful feedback within a few seconds and high quality images within a few tens of seconds, without the need for any pre-computation. Our approach allows us to significantly outperform other interactive systems based on caching ray-tracing samples, especially in dynamic scenes. Based on our results, we believe that the Shading Cache will be an invaluable tool in lighting design and modelling while rendering.","PeriodicalId":197746,"journal":{"name":"Proceedings of the 29th annual conference on Computer graphics and interactive techniques","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131486497","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we describe a novel technique, called motion texture, for synthesizing complex human-figure motion (e.g., dancing) that is statistically similar to the original motion capture data. We define motion texture as a set of motion textons and their distribution, which characterize the stochastic and dynamic nature of the captured motion. Specifically, a motion texton is modeled by a linear dynamic system (LDS), while the texton distribution is represented by a transition matrix indicating how likely each texton is to switch to another. We have designed a maximum likelihood algorithm to learn the motion textons and their relationship from the captured dance motion. The learnt motion texture can then be used to generate new animations automatically and/or edit animation sequences interactively. Most interestingly, motion texture can be manipulated at different levels, either by changing the fine details of a specific motion at the texton level or by designing a new choreography at the distribution level. Our approach is demonstrated by many synthesized sequences of visually compelling dance motion.
{"title":"Motion texture: a two-level statistical model for character motion synthesis","authors":"Yan Li, Tianshu Wang, H. Shum","doi":"10.1145/566570.566604","DOIUrl":"https://doi.org/10.1145/566570.566604","url":null,"abstract":"In this paper, we describe a novel technique, called motion texture, for synthesizing complex human-figure motion (e.g., dancing) that is statistically similar to the original motion captured data. We define motion texture as a set of motion textons and their distribution, which characterize the stochastic and dynamic nature of the captured motion. Specifically, a motion texton is modeled by a linear dynamic system (LDS) while the texton distribution is represented by a transition matrix indicating how likely each texton is switched to another. We have designed a maximum likelihood algorithm to learn the motion textons and their relationship from the captured dance motion. The learnt motion texture can then be used to generate new animations automatically and/or edit animation sequences interactively. Most interestingly, motion texture can be manipulated at different levels, either by changing the fine details of a specific motion at the texton level or by designing a new choreography at the distribution level. Our approach is demonstrated by many synthesized sequences of visually compelling dance motion.","PeriodicalId":197746,"journal":{"name":"Proceedings of the 29th annual conference on Computer graphics and interactive techniques","volume":"90 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126442633","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents a solution for texture mapping unparameterized models. The quality of a texture on a model is often limited by the model's parameterization into a 2D texture space. For models with complex topologies or complex distributions of structural detail, finding this parameterization can be very difficult and usually must be performed manually through a slow iterative process between the modeler and texture painter. This is especially true of models which carry no natural parameterizations, such as subdivision surfaces or models acquired from 3D scanners. Instead, we remove the 2D parameterization and store the texture in 3D space as a sparse, adaptive octree. Because no parameterization is necessary, textures can be painted on any surface that can be rendered. No mappings between disparate topologies are used, so texture artifacts such as seams and stretching do not exist. Because this method is adaptive, detail is created in the map only where required by the texture painter, conserving memory usage.
{"title":"Painting and rendering textures on unparameterized models","authors":"D. DeBry, Jonathan Gibbs, Devorah DeLeon Petty, Nate Robins","doi":"10.1145/566570.566649","DOIUrl":"https://doi.org/10.1145/566570.566649","url":null,"abstract":"This paper presents a solution for texture mapping unparameterized models. The quality of a texture on a model is often limited by the model's parameterization into a 2D texture space. For models with complex topologies or complex distributions of structural detail, finding this parameterization can be very difficult and usually must be performed manually through a slow iterative process between the modeler and texture painter. This is especially true of models which carry no natural parameterizations, such as subdivision surfaces or models acquired from 3D scanners. Instead, we remove the 2D parameterization and store the texture in 3D space as a sparse, adaptive octree. Because no parameterization is necessary, textures can be painted on any surface that can be rendered. No mappings between disparate topologies are used, so texture artifacts such as seams and stretching do not exist. Because this method is adaptive, detail is created in the map only where required by the texture painter, conserving memory usage.","PeriodicalId":197746,"journal":{"name":"Proceedings of the 29th annual conference on Computer graphics and interactive techniques","volume":"112 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131735410","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The ability to learn is a potentially compelling and important quality for interactive synthetic characters. To that end, we describe a practical approach to real-time learning for synthetic characters. Our implementation is grounded in the techniques of reinforcement learning and informed by insights from animal training. It simplifies the learning task for characters by (a) enabling them to take advantage of predictable regularities in their world, (b) allowing them to make maximal use of any supervisory signals, and (c) making them easy to train by humans. We built an autonomous animated dog that can be trained with a technique used to train real dogs called "clicker training". Capabilities demonstrated include being trained to recognize and use acoustic patterns as cues for actions, as well as to synthesize new actions from novel paths through its motion space. A key contribution of this paper is to demonstrate that by addressing the three problems of state, action, and state-action space discovery at the same time, the solution for each becomes easier. Finally, we articulate heuristics and design principles that make learning practical for synthetic characters.
{"title":"Integrated learning for interactive synthetic characters","authors":"B. Blumberg, Marc Downie, Y. Ivanov, Matt Berlin, M. P. Johnson, Bill Tomlinson","doi":"10.1145/566570.566597","DOIUrl":"https://doi.org/10.1145/566570.566597","url":null,"abstract":"The ability to learn is a potentially compelling and important quality for interactive synthetic characters. To that end, we describe a practical approach to real-time learning for synthetic characters. Our implementation is grounded in the techniques of reinforcement learning and informed by insights from animal training. It simplifies the learning task for characters by (a) enabling them to take advantage of predictable regularities in their world, (b) allowing them to make maximal use of any supervisory signals, and (c) making them easy to train by humans.We built an autonomous animated dog that can be trained with a technique used to train real dogs called \"clicker training\". Capabilities demonstrated include being trained to recognize and use acoustic patterns as cues for actions, as well as to synthesize new actions from novel paths through its motion space.A key contribution of this paper is to demonstrate that by addressing the three problems of state, action, and state-action space discovery at the same time, the solution for each becomes easier. Finally, we articulate heuristics and design principles that make learning practical for synthetic characters.","PeriodicalId":197746,"journal":{"name":"Proceedings of the 29th annual conference on Computer graphics and interactive techniques","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133339302","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Real-time control of three-dimensional avatars is an important problem in the context of computer games and virtual environments. Avatar animation and control is difficult, however, because a large repertoire of avatar behaviors must be made available, and the user must be able to select from this set of behaviors, possibly with a low-dimensional input device. One appealing approach to obtaining a rich set of avatar behaviors is to collect an extended, unlabeled sequence of motion data appropriate to the application. In this paper, we show that such a motion database can be preprocessed for flexibility in behavior and efficient search and exploited for real-time avatar control. Flexibility is created by identifying plausible transitions between motion segments, and efficient search through the resulting graph structure is obtained through clustering. Three interface techniques are demonstrated for controlling avatar motion using this data structure: the user selects from a set of available choices, sketches a path through an environment, or acts out a desired motion in front of a video camera. We demonstrate the flexibility of the approach through four different applications and compare the avatar motion to directly recorded human motion.
{"title":"Interactive control of avatars animated with human motion data","authors":"Jehee Lee, Jinxiang Chai, Paul S. A. Reitsma, J. Hodgins, N. Pollard","doi":"10.1145/566570.566607","DOIUrl":"https://doi.org/10.1145/566570.566607","url":null,"abstract":"Real-time control of three-dimensional avatars is an important problem in the context of computer games and virtual environments. Avatar animation and control is difficult, however, because a large repertoire of avatar behaviors must be made available, and the user must be able to select from this set of behaviors, possibly with a low-dimensional input device. One appealing approach to obtaining a rich set of avatar behaviors is to collect an extended, unlabeled sequence of motion data appropriate to the application. In this paper, we show that such a motion database can be preprocessed for flexibility in behavior and efficient search and exploited for real-time avatar control. Flexibility is created by identifying plausible transitions between motion segments, and efficient search through the resulting graph structure is obtained through clustering. Three interface techniques are demonstrated for controlling avatar motion using this data structure: the user selects from a set of available choices, sketches a path through an environment, or acts out a desired motion in front of a video camera. We demonstrate the flexibility of the approach through four different applications and compare the avatar motion to directly recorded human motion.","PeriodicalId":197746,"journal":{"name":"Proceedings of the 29th annual conference on Computer graphics and interactive techniques","volume":"180 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116266860","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Rendering performance of consumer graphics hardware benefits from pre-processing geometric data into a form targeted to the underlying API and hardware. The various elements of geometric data are then coupled with a shading program at runtime to draw the asset. In this paper we describe a system in which this pre-processing is a compilation step that processes the geometric data with knowledge of the shading programs that will use them. The data are converted into structures targeted directly to the hardware, and a code stream is assembled that describes the manipulations required to render these data structures. Our compiler is structured like a traditional code compiler: a front end reads the geometric data and attributes (hereafter referred to as an art asset) output from a 3D modeling package, together with shaders in a platform-independent form, and performs platform-independent optimizations; a back end performs platform-specific optimizations and generates platform-targeted data structures and code streams. Our compiler back end has been targeted to four platforms, three of which are radically different from one another. On all platforms the rendering performance of our compiled assets, used in real situations, is well above that of hand-coded assets.
{"title":"Shader-driven compilation of rendering assets","authors":"P. Lalonde, Eric Schenk","doi":"10.1145/566570.566641","DOIUrl":"https://doi.org/10.1145/566570.566641","url":null,"abstract":"Rendering performance of consumer graphics hardware benefits from pre-processing geometric data into a form targeted to the underlying API and hardware. The various elements of geometric data are then coupled with a shading program at runtime to draw the asset.In this paper we describe a system in which pre-processing is done in a compilation process in which the geometric data are processed with knowledge of their shading programs. The data are converted into structures targeted directly to the hardware, and a code stream is assembled that describes the manipulations required to render these data structures. Our compiler is structured like a traditional code compiler, with a front end that reads the geometric data and attributes (hereafter referred to as an art asset) output from a 3D modeling package and shaders in a platform independent form and performs platform-independent optimizations, and a back end that performs platform-specific optimizations and generates platform-targeted data structures and code streams.Our compiler back-end has been targeted to four platforms, three of which are radically different from one another. On all platforms the rendering performance of our compiled assets, used in real situations, is well above that of hand-coded assets.","PeriodicalId":197746,"journal":{"name":"Proceedings of the 29th annual conference on Computer graphics and interactive techniques","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131824606","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present a system that lets a designer directly annotate a 3D model with strokes, imparting a personal aesthetic to the non-photorealistic rendering of the object. The artist chooses a "brush" style, then draws strokes over the model from one or more viewpoints. When the system renders the scene from any new viewpoint, it adapts the number and placement of the strokes appropriately to maintain the original look.
{"title":"WYSIWYG NPR: drawing strokes directly on 3D models","authors":"Robert D. Kalnins, L. Markosian, Barbara J. Meier, Michael A. Kowalski, Joseph C. Lee, Phillip L. Davidson, Matthew Webb, J. Hughes, Adam Finkelstein","doi":"10.1145/566570.566648","DOIUrl":"https://doi.org/10.1145/566570.566648","url":null,"abstract":"We present a system that lets a designer directly annotate a 3D model with strokes, imparting a personal aesthetic to the non-photorealistic rendering of the object. The artist chooses a \"brush\" style, then draws strokes over the model from one or more viewpoints. When the system renders the scene from any new viewpoint, it adapts the number and placement of the strokes appropriately to maintain the original look.","PeriodicalId":197746,"journal":{"name":"Proceedings of the 29th annual conference on Computer graphics and interactive techniques","volume":"127 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127080390","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}