Dynamic stylized shading primitives
David Vanderhaeghe, Romain Vergne, Pascal Barla, William V. Baxter
International Symposium on Non-Photorealistic Animation and Rendering, 2011. doi:10.1145/2024676.2024693
Shading appearance in illustrations, comics and graphic novels is designed to convey illumination, material, and surface-shape characteristics at once. Moreover, shading may vary with surface distance, lighting, character expression, and the timing of the action, in order to articulate storytelling or draw attention to part of an object. In this paper, we present a method that imitates such expressive stylized shading techniques in dynamic 3D scenes and offers a simple, flexible means for artists to design and tweak the shading appearance and its dynamic behavior. The key contribution of our approach is to seamlessly vary appearance using a combination of shading primitives that take into account lighting direction, material characteristics, and surface features. We demonstrate their flexibility in a number of scenarios: minimal shading, comics or cartoon rendering, and glossy and anisotropic material effects, including a variety of dynamic variations based on orientation, timing, or depth. Our prototype implementation combines shading primitives with a layered approach and runs in real time on the GPU.
{"title":"Dynamic stylized shading primitives","authors":"David Vanderhaeghe, Romain Vergne, Pascal Barla, William V. Baxter","doi":"10.1145/2024676.2024693","DOIUrl":"https://doi.org/10.1145/2024676.2024693","url":null,"abstract":"Shading appearance in illustrations, comics and graphic novels is designed to convey illumination, material and surface shape characteristics at once. Moreover, shading may vary depending on different configurations of surface distance, lighting, character expressions, timing of the action, to articulate storytelling or draw attention to a part of an object. In this paper, we present a method that imitates such expressive stylized shading techniques in dynamic 3D scenes, and which offers a simple and flexible means for artists to design and tweak the shading appearance and its dynamic behavior. The key contribution of our approach is to seamlessly vary appearance by using a combination of shading primitives that take into account lighting direction, material characteristics and surface features. We demonstrate their flexibility in a number of scenarios: minimal shading, comics or cartoon rendering, glossy and anisotropic material effects; including a variety of dynamic variations based on orientation, timing or depth. Our prototype implementation combines shading primitives with a layered approach and runs in real-time on the GPU.","PeriodicalId":204343,"journal":{"name":"International Symposium on Non-Photorealistic Animation and Rendering","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129915305","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Spatio-temporal analysis for parameterizing animated lines
Bert Buchholz, Noura Faraj, Sylvain Paris, E. Eisemann, T. Boubekeur
International Symposium on Non-Photorealistic Animation and Rendering, 2011. doi:10.1145/2024676.2024690
We describe a method to parameterize lines generated from animated 3D models in the context of animated line drawings. Cartoons and mechanical illustrations are popular subjects of non-photorealistic drawings and are often generated from 3D models. Adding texture to the lines, for instance to depict brush strokes or dashed lines, enables greater expressiveness, e.g. to distinguish between visible and hidden lines. However, dynamic visibility events and the evolving shape of the lines raise issues that have so far been only partially explored. In this paper, we assume that the entire 3D animation is known ahead of time, as is typically the case for feature animations and off-line rendering. At the core of our method is a geometric formulation of the problem as a parameterization of the space-time surface swept by a 2D line during the animation. First, we build this surface by extracting lines in each frame; we demonstrate our approach with silhouette lines. Then, we locate visibility events that would create discontinuities and propagate them through time. These events decompose the surface into charts with disc topology. We parameterize each chart via a least-squares approach that reflects the specific requirements of line drawing. This step results in a texture atlas of the space-time surface that defines the parameterization of each line. We show that by adjusting a few weights in the least-squares energy, the artist can obtain artifact-free animated lines in a variety of typical non-photorealistic styles such as painterly strokes and technical line drawing.
{"title":"Spatio-temporal analysis for parameterizing animated lines","authors":"Bert Buchholz, Noura Faraj, Sylvain Paris, E. Eisemann, T. Boubekeur","doi":"10.1145/2024676.2024690","DOIUrl":"https://doi.org/10.1145/2024676.2024690","url":null,"abstract":"We describe a method to parameterize lines generated from animated 3D models in the context of animated line drawings. Cartoons and mechanical illustrations are popular subjects of non-photorealistic drawings and are often generated from 3D models. Adding texture to the lines, for instance to depict brush strokes or dashed lines, enables greater expressiveness, e.g. to distinguish between visible and hidden lines. However, dynamic visibility events and the evolving shape of the lines raise issues that have been only partially explored so far. In this paper, we assume that the entire 3D animation is known ahead of time, as is typically the case for feature animations and off-line rendering. At the core of our method is a geometric formulation of the problem as a parameterization of the space-time surface swept by a 2D line during the animation. First, we build this surface by extracting lines in each frame. We demonstrate our approach with silhouette lines. Then, we locate visibility events that would create discontinuities and propagate them through time. They decompose the surface into charts with a disc topology. We parameterize each chart via a least-squares approach that reflects the specific requirements of line drawing. This step results in a texture atlas of the space-time surface which defines the parameterization for each line. We show that by adjusting a few weights in the least-squares energy, the artist can obtain an artifact-free animated motion in a variety of typical non-photorealistic styles such as painterly strokes and technical line drawing.","PeriodicalId":204343,"journal":{"name":"International Symposium on Non-Photorealistic Animation and Rendering","volume":"199 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114191689","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Animation for ancient tile mosaics
Dongwann Kang, Yong-Jin Ohn, M. Han, K. Yoon
International Symposium on Non-Photorealistic Animation and Rendering, 2011. doi:10.1145/2024676.2024701
In mosaic art, tiles of unique color, material, and shape are arranged on a plane to form patterns and shapes. Although previous research has addressed creating static mosaic-like images from non-mosaic input, mosaic animation requires a method to maintain the temporal coherence of tiles. Here we introduce a method that creates mosaic animations from videos by applying a temporally and spatially coherent tile-arrangement technique. We extract coherent feature lines from the input video using video segmentation and arrange tiles along those feature lines. We then animate the tiles according to the motion in the video, add and delete tiles to preserve tile density, and smooth tile colors across frames.
{"title":"Animation for ancient tile mosaics","authors":"Dongwann Kang, Yong-Jin Ohn, M. Han, K. Yoon","doi":"10.1145/2024676.2024701","DOIUrl":"https://doi.org/10.1145/2024676.2024701","url":null,"abstract":"In mosaic art, tiles of unique color, material, and shape are arranged on a plane to form patterns and shapes. Although previous research has been carried out on creating static mosaic-like images from non-mosaic input, mosaic animation requires a method to maintain the temporal coherence of tiles. Here we introduce a method that creates mosaic animations from videos by applying a temporally and spatially coherent tile-arrangement technique. We extract coherent feature lines from video input using video segmentation, and arrange tiles based on the feature lines. We then animate tiles along the motion of video, add and delete tiles to preserve the tile density, and smooth tile color via frames.","PeriodicalId":204343,"journal":{"name":"International Symposium on Non-Photorealistic Animation and Rendering","volume":"146 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120996322","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Snaxels on a plane
Kevin Karsch, J. Hart
International Symposium on Non-Photorealistic Animation and Rendering, 2011. doi:10.1145/2024676.2024683
While many algorithms exist for tracing the various contours used to illustrate a meshed object, few organize these contours into region-bounding closed loops. Tracing closed-loop boundaries on a mesh can be problematic due to switchbacks caused by subtle surface variation, and organizing the resulting regions into a planar map can produce many small region components due to imprecision and noise. This paper adapts "snaxels," an energy-minimizing active contour method designed for robust mesh processing, and repurposes it to generate visual, shadow, and shading contours, as well as a simplified visual-surface planar map, useful for stylized vector-art illustration of the mesh. The snaxel active contours can also track contours as the mesh animates, and frame-to-frame correspondences between snaxels lead to a new method for converting the moving contours on a 3-D animated mesh into 2-D SVG curve animations for efficient embedding in Flash, PowerPoint, and other dynamic vector-art platforms.
{"title":"Snaxels on a plane","authors":"Kevin Karsch, J. Hart","doi":"10.1145/2024676.2024683","DOIUrl":"https://doi.org/10.1145/2024676.2024683","url":null,"abstract":"While many algorithms exist for tracing various contours for illustrating a meshed object, few algorithms organize these contours into region-bounding closed loops. Tracing closed-loop boundaries on a mesh can be problematic due to switchbacks caused by subtle surface variation, and the organization of these regions into a planar map can lead to many small region components due to imprecision and noise. This paper adapts \"snaxels,\" an energy minimizing active contour method designed for robust mesh processing, and repurposes it to generate visual, shadow and shading contours, and a simplified visual-surface planar map, useful for stylized vector art illustration of the mesh. The snaxel active contours can also track contours as the mesh animates, and frame-to-frame correspondences between snaxels lead to a new method to convert the moving contours on a 3-D animated mesh into 2-D SVG curve animations for efficient embedding in Flash, PowerPoint and other dynamic vector art platforms.","PeriodicalId":204343,"journal":{"name":"International Symposium on Non-Photorealistic Animation and Rendering","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127055808","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Image and video abstraction by multi-scale anisotropic Kuwahara filtering
J. Kyprianidis
International Symposium on Non-Photorealistic Animation and Rendering, 2011. doi:10.1145/2024676.2024686
The anisotropic Kuwahara filter is an edge-preserving filter that is especially useful for creating stylized abstractions from images or videos. It is based on a generalization of the Kuwahara filter that adapts to the local structure of image features. In this work, two limitations of the anisotropic Kuwahara filter are addressed. First, it is shown that adding thresholding to the computation of the sectors' weighting terms avoids artifacts and achieves smooth results in noise-corrupted regions. Second, a multi-scale computation scheme is proposed that simultaneously propagates local orientation estimates and filtering results up a low-pass-filtered pyramid. This allows for a strong abstraction effect and avoids artifacts in large low-contrast regions. The propagation is controlled by local variances and anisotropies that are derived during the computation without extra overhead, resulting in a highly efficient scheme that is particularly suitable for real-time processing on the GPU.
{"title":"Image and video abstraction by multi-scale anisotropic Kuwahara filtering","authors":"J. Kyprianidis","doi":"10.1145/2024676.2024686","DOIUrl":"https://doi.org/10.1145/2024676.2024686","url":null,"abstract":"The anisotropic Kuwahara filter is an edge-preserving filter that is especially useful for creating stylized abstractions from images or videos. It is based on a generalization of the Kuwahara filter that is adapted to the local structure of image features. In this work, two limitations of the anisotropic Kuwahara filter are addressed. First, it is shown that by adding thresholding to the weighting term computation of the sectors, artifacts are avoided and smooth results in noise-corrupted regions are achieved. Second, a multi-scale computation scheme is proposed that simultaneously propagates local orientation estimates and filtering results up a low-pass filtered pyramid. This allows for a strong abstraction effect and avoids artifacts in large low-contrast regions. The propagation is controlled by the local variances and anisotropies that are derived during the computation without extra overhead, resulting in a highly efficient scheme that is particularly suitable for real-time processing on a GPU.","PeriodicalId":204343,"journal":{"name":"International Symposium on Non-Photorealistic Animation and Rendering","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129393409","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Portrait painting using active templates
Mingtian Zhao, Song-Chun Zhu
International Symposium on Non-Photorealistic Animation and Rendering, 2011. doi:10.1145/2024676.2024696
Portraiture plays a substantial role in traditional painting, yet it has not been studied in depth in painterly rendering research. The difficulty in rendering human portraits stems from our acute visual perception of the structure of the human face. To achieve satisfactory results, a portrait rendering algorithm should therefore account for facial structure. In this paper, we present an example-based method that renders portrait paintings from photographs by transferring brush strokes from portrait templates previously painted by artists. These strokes carry rich information not only about the facial structure but also about how artists depict that structure with large, decisive brush strokes and vibrant colors. Using a dictionary of portrait painting templates for different types of faces, we show that this method can produce satisfactory results.
{"title":"Portrait painting using active templates","authors":"Mingtian Zhao, Song-Chun Zhu","doi":"10.1145/2024676.2024696","DOIUrl":"https://doi.org/10.1145/2024676.2024696","url":null,"abstract":"Portraiture plays a substantial role in traditional painting, yet it has not been studied in depth in painterly rendering research. The difficulty in rendering human portraits is due to our acute visual perception to the structure of human face. To achieve satisfactory results, a portrait rendering algorithm should account for facial structure. In this paper, we present an example-based method to render portrait paintings from photographs, by transferring brush strokes from previously painted portrait templates by artists. These strokes carry rich information about not only the facial structure but also how artists depict the structure with large and decisive brush strokes and vibrant colors. With a dictionary of portrait painting templates for different types of faces, we show that this method can produce satisfactory results.","PeriodicalId":204343,"journal":{"name":"International Symposium on Non-Photorealistic Animation and Rendering","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134298611","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards automatic concept transfer
Naila Murray, S. Skaff, L. Marchesotti, F. Perronnin
International Symposium on Non-Photorealistic Animation and Rendering, 2011. doi:10.1145/2024676.2024703
This paper introduces a novel approach to automatic concept transfer; examples of such concepts are "romantic", "earthy", and "luscious". The approach modifies the color content of an input image given only a concept specified by the user in natural language, thereby requiring minimal user input. It is particularly useful for users who know the message they wish to convey in the transferred image but are unsure of the color combination needed to achieve it. The user may adjust the intensity of the concept transfer to their liking with a single parameter. The proposed approach uses a convex clustering algorithm, with a novel pruning mechanism, to automatically set the complexity of the models of chromatic content. It also uses the Earth Mover's Distance to compute a mapping between the models of the input image and the target chromatic concept. Results, confirmed by a user study, show that our approach yields transferred images that effectively represent the target concepts.
{"title":"Towards automatic concept transfer","authors":"Naila Murray, S. Skaff, L. Marchesotti, F. Perronnin","doi":"10.1145/2024676.2024703","DOIUrl":"https://doi.org/10.1145/2024676.2024703","url":null,"abstract":"This paper introduces a novel approach to automatic concept transfer; examples of concepts are \"romantic\", \"earthy\", and \"luscious\". The approach modifies the color content of an input image given only a concept specified by a user in natural language, thereby requiring minimal user input. This approach is particularly useful for users who are aware of the message they wish to convey in the transferred image while being unsure of the color combination needed to achieve the corresponding transfer. The user may adjust the intensity level of the concept transfer to his/her liking with a single parameter. The proposed approach uses a convex clustering algorithm, with a novel pruning mechanism, to automatically set the complexity of models of chromatic content. It also uses the Earth-Mover's Distance to compute a mapping between the models of the input image and the target chromatic concept. Results show that our approach yields transferred images which effectively represent concepts, as confirmed by a user study.","PeriodicalId":204343,"journal":{"name":"International Symposium on Non-Photorealistic Animation and Rendering","volume":"96 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134282111","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
XDoG: advanced image stylization with eXtended Difference-of-Gaussians
H. Winnemöller
International Symposium on Non-Photorealistic Animation and Rendering, 2011. doi:10.1145/2024676.2024700
Recent extensions to the standard Difference-of-Gaussians (DoG) edge-detection operator have rendered it less susceptible to noise and increased its aesthetic appeal for stylistic depiction applications. Despite these advances, the technical subtleties and stylistic potential of the DoG operator are often overlooked. This paper reviews the DoG operator, including recent improvements, and offers many new results spanning a variety of styles, including pencil shading, pastel, hatching, and binary black-and-white images. Additionally, we demonstrate a range of subtle artistic effects, such as ghosting, speed-lines, negative edges, indication, and abstraction, and we explain how all of these are obtained with no, or only slight, modifications to an extended DoG formulation. In all cases, the visual quality achieved by the extended DoG operator is comparable to or better than that of systems dedicated to a single style.
{"title":"XDoG: advanced image stylization with eXtended Difference-of-Gaussians","authors":"H. Winnemöller","doi":"10.1145/2024676.2024700","DOIUrl":"https://doi.org/10.1145/2024676.2024700","url":null,"abstract":"Recent extensions to the standard Difference-of-Gaussians (DoG) edge detection operator have rendered it less susceptible to noise and increased its aesthetic appeal for stylistic depiction applications. Despite these advances, the technical subtleties and stylistic potential of the DoG operator are often overlooked. This paper reviews the DoG operator, including recent improvements, and offers many new results spanning a variety of styles, including pencil-shading, pastel, hatching, and binary black-and-white images. Additionally, we demonstrate a range of subtle artistic effects, such as ghosting, speed-lines, negative edges, indication, and abstraction, and we explain how all of these are obtained without, or only with slight modifications to an extended DoG formulation. In all cases, the visual quality achieved by the extended DoG operator is comparable to or better than those of systems dedicated to a single style.","PeriodicalId":204343,"journal":{"name":"International Symposium on Non-Photorealistic Animation and Rendering","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134559197","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Stylization-based ray prioritization for guaranteed frame rates
Bernhard Kainz, M. Steinberger, Stefan Hauswiesner, Rostislav Khlebnikov, D. Schmalstieg
International Symposium on Non-Photorealistic Animation and Rendering, 2011. doi:10.1145/2024676.2024685
This paper presents a new method to control graceful scene degradation in complex ray-based rendering environments. It constrains the image sampling density using object features that are known to support the comprehension of three-dimensional shape. The method uses Non-Photorealistic Rendering (NPR) techniques to extract features such as silhouettes, suggestive contours, suggestive highlights, ridges, and valleys. To map different feature types to sampling densities, we also present an evaluation of each feature's impact on the resulting image quality. To reconstruct the image from sparse sampling data, we use linear interpolation on an adaptively aligned fractal pattern. With this technique, we present an algorithm that guarantees a desired minimal frame rate without much loss of image quality. Our scheduling algorithm maximizes the use of each given time slice by rendering features in order of their importance values until the time constraint is reached. We demonstrate how our method can be used to speed up rendering and guarantee frame rates in complex ray-based environments consisting of both geometric and volumetric data.
{"title":"Stylization-based ray prioritization for guaranteed frame rates","authors":"Bernhard Kainz, M. Steinberger, Stefan Hauswiesner, Rostislav Khlebnikov, D. Schmalstieg","doi":"10.1145/2024676.2024685","DOIUrl":"https://doi.org/10.1145/2024676.2024685","url":null,"abstract":"This paper presents a new method to control graceful scene degradation in complex ray-based rendering environments. It proposes to constrain the image sampling density with object features, which are known to support the comprehension of the three-dimensional shape. The presented method uses Non-Photorealistic Rendering (NPR) techniques to extract features such as silhouettes, suggestive contours, suggestive highlights, ridges and valleys. To map different feature types to sampling densities, we also present an evaluation of the features impact on the resulting image quality. To reconstruct the image from sparse sampling data, we use linear interpolation on an adaptively aligned fractal pattern. With this technique, we are able to present an algorithm that guarantees a desired minimal frame rate without much loss of image quality. Our scheduling algorithm maximizes the use of each given time slice by rendering features in order of their corresponding importance values until a time constraint is reached. We demonstrate how our method can be used to boost and guarantee the rendering time in complex ray-based environments consisting of geometric as well as volumetric data.","PeriodicalId":204343,"journal":{"name":"International Symposium on Non-Photorealistic Animation and Rendering","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133957019","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Temporal noise control for sketchy animation
Gioacchino Noris, D. Sýkora, Stelian Coros, B. Whited, Maryann Simmons, A. Sorkine-Hornung, M. Gross, R. Sumner
International Symposium on Non-Photorealistic Animation and Rendering, 2011. doi:10.1145/2024676.2024691
We propose a technique to control the temporal noise present in sketchy animations. Given an input animation drawn digitally, our approach combines motion extraction and inbetweening techniques to generate a reduced-noise sketchy animation registered to the input. The amount of noise is then controlled by a single continuous parameter. Our method can effectively reduce the temporal noise in a sequence of sketches to a desired rate while preserving the geometric richness of the sketchy style in each frame. This exposes temporal noise as an additional artistic parameter, e.g. to emphasize character emotions and scene atmosphere, and enables sketchy content to reach broader audiences by producing animations with comfortable noise levels. We demonstrate the effectiveness of our approach on a series of rough hand-drawn animations.
{"title":"Temporal noise control for sketchy animation","authors":"Gioacchino Noris, D. Sýkora, Stelian Coros, B. Whited, Maryann Simmons, A. Sorkine-Hornung, M. Gross, R. Sumner","doi":"10.1145/2024676.2024691","DOIUrl":"https://doi.org/10.1145/2024676.2024691","url":null,"abstract":"We propose a technique to control the temporal noise present in sketchy animations. Given an input animation drawn digitally, our approach works by combining motion extraction and inbetweening techniques to generate a reduced-noise sketchy animation registered to the input animation. The amount of noise is then controlled by a continuous parameter value. Our method can be applied to effectively reduce the temporal noise present in sequences of sketches to a desired rate, while preserving the geometric richness of the sketchy style in each frame. This provides the manipulation of temporal noise as an additional artistic parameter, e.g. to emphasize character emotions and scene atmosphere, and enables the display of sketchy content to broader audiences by producing animations with comfortable noise levels. We demonstrate the effectiveness of our approach on a series of rough hand-drawn animations.","PeriodicalId":204343,"journal":{"name":"International Symposium on Non-Photorealistic Animation and Rendering","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123000524","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}