Title: Combining sketch and tone for pencil drawing production
International Symposium on Non-Photorealistic Animation and Rendering | Pub Date: 2012-06-04 | DOI: 10.5555/2330147.2330161
Cewu Lu, Li Xu, Jiaya Jia
We propose a new system that produces pencil drawings from natural images. The results contain varied, natural strokes and patterns and are structurally representative. They are achieved by a novel combination of tone and stroke structures, which complement each other in generating visually constrained results. Prior knowledge of pencil drawing is also incorporated, making the two basic functions robust against noise, strong texture, and significant illumination variation. By conveying edge, shadow, and shading information, our pencil drawing system establishes a style that artists use to understand visual data and depict them, while keeping the results rich in well-ordered lines that vividly express the original scene.
Title: Active strokes: coherent line stylization for animated 3D models
International Symposium on Non-Photorealistic Animation and Rendering | Pub Date: 2012-06-04 | DOI: 10.2312/PE/NPAR/NPAR12/037-046
P. Bénard, Jingwan Lu, Forrester Cole, Adam Finkelstein, J. Thollot
This paper presents a method for creating coherently animated line drawings that include strong abstraction and stylization effects. These effects are achieved with active strokes: 2D contours that approximate and track the lines of an animated 3D scene. Active strokes perform two functions: they connect and smooth unorganized line samples, and they carry coherent parameterization to support stylized rendering. Line samples are approximated and tracked using active contours ("snakes") that automatically update their arrangement and topology to match the animation. Parameterization is maintained by brush paths that follow the snakes but are independent, permitting substantial shape abstraction without compromising fidelity in tracking. This approach renders complex models in a wide range of styles at interactive rates, making it suitable for applications like games and interactive illustrations.
{"title":"Active strokes: coherent line stylization for animated 3D models","authors":"P. Bénard, Jingwan Lu, Forrester Cole, Adam Finkelstein, J. Thollot","doi":"10.2312/PE/NPAR/NPAR12/037-046","DOIUrl":"https://doi.org/10.2312/PE/NPAR/NPAR12/037-046","url":null,"abstract":"This paper presents a method for creating coherently animated line drawings that include strong abstraction and stylization effects. These effects are achieved with active strokes: 2D contours that approximate and track the lines of an animated 3D scene. Active strokes perform two functions: they connect and smooth unorganized line samples, and they carry coherent parameterization to support stylized rendering. Line samples are approximated and tracked using active contours (\"snakes\") that automatically update their arrangment and topology to match the animation. Parameterization is maintained by brush paths that follow the snakes but are independent, permitting substantial shape abstraction without compromising fidelity in tracking. This approach renders complex models in a wide range of styles at interactive rates, making it suitable for applications like games and interactive illustrations.","PeriodicalId":204343,"journal":{"name":"International Symposium on Non-Photorealistic Animation and Rendering","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132432968","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: No photos harmed/growing paths from seed: an exhibition
International Symposium on Non-Photorealistic Animation and Rendering | Pub Date: 2012-06-04 | DOI: 10.2312/PE/NPAR/NPAR12/001-010
S. Colton, Blanca Pérez Ferrer
We report on an exhibition centered around a dialogue between a Computational Creativity researcher presenting artwork generated by a computer program and a classically trained artist taking inspiration from the computational processes. The main purpose of the exhibition was to place software-generated art (where the program takes on some aesthetic and generative responsibilities, rather than acting as a mere tool) in both an art-production context and an art-historical context, by exploring the themes of creative responsibility and the loss of aura surrounding a work of art. A secondary purpose was to highlight the fact that computer generated art can be representational without relying on digital photographs as inputs. We describe certain technical hurdles we overcame in the production of the exhibition and the feedback we gained, in addition to elaborating on how the event and the project as a whole fit into an art-historical context. We conclude by giving brief details of another exhibition involving art generated by the same software system, in which the notion of progression was explored; by describing a planned exhibition, in which autonomy and independence in the system will be highlighted; and by providing a partial roadmap for progress towards autonomously creative software in the visual arts.
{"title":"No photos harmed/growing paths from seed: an exhibition","authors":"S. Colton, Blanca Pérez Ferrer","doi":"10.2312/PE/NPAR/NPAR12/001-010","DOIUrl":"https://doi.org/10.2312/PE/NPAR/NPAR12/001-010","url":null,"abstract":"We report on an exhibition centered around a dialogue between a Computational Creativity researcher presenting artwork generated by a computer program and a classically trained artist taking inspiration from the computational processes. The main purpose of the exhibition was to place software-generated art (where the program takes on some aesthetic and generative responsibilities, rather than acting as a mere tool) in both an art-production context and an art-historical context, by exploring the themes of creative responsibility and the loss of aura surrounding a work of art. A secondary purpose was to highlight the fact that computer generated art can be representational without relying on digital photographs as inputs. We describe certain technical hurdles we overcame in the production of the exhibition and the feedback we gained, in addition to elaborating on how the event and the project as a whole fits into an art-historical context. We conclude with brief details of another exhibition involving art generated by the same software system, where the notion of progression was explored; by describing a planned exhibition, where autonomy and independence in the system will be highlighted; and by providing a partial roadmap for progress towards autonomously creative software in the visual arts.","PeriodicalId":204343,"journal":{"name":"International Symposium on Non-Photorealistic Animation and Rendering","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130307640","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Texture-preserving abstraction
International Symposium on Non-Photorealistic Animation and Rendering | Pub Date: 2012-06-04 | DOI: 10.2312/PE/NPAR/NPAR12/075-082
D. Mould
Image abstraction traditionally eliminates texture, but doing so ignores the more elegant alternative of texture indication, e.g., suggesting the presence of texture through irregular silhouettes and locally chosen details. We propose a variant of geodesic image filtering that preserves the locally strongest edges, so that both strong edges and weak edges are retained depending on the surrounding context. Our contribution is to introduce cumulative range geodesic filtering, in which the distance in the image plane is lengthened in proportion to the color distance. We apply the new filtering scheme to abstraction applications and demonstrate that it has powerful structure-preserving capabilities, especially regarding preservation and indication of textures.
{"title":"Texture-preserving abstraction","authors":"D. Mould","doi":"10.2312/PE/NPAR/NPAR12/075-082","DOIUrl":"https://doi.org/10.2312/PE/NPAR/NPAR12/075-082","url":null,"abstract":"Image abstraction traditionally eliminates texture, but doing so ignores the more elegant alternative of texture indication, e.g., suggesting the presence of texture through irregular silhouettes and locally chosen details. We propose a variant of geodesic image filtering which preserves the locally strongest edges, leading to preservation of both strong edges and weak edges depending on the surrounding context.\u0000 Our contribution is to introduce cumulative range geodesic filtering, where the distance in the image plane is lengthened proportional to the color distance. We apply the new filtering scheme to abstraction applications and demonstrate that it has powerful structure-preserving capabilities, especially regarding preservation and indication of textures.","PeriodicalId":204343,"journal":{"name":"International Symposium on Non-Photorealistic Animation and Rendering","volume":"72 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126550896","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: TexToons: practical texture mapping for hand-drawn cartoon animations
International Symposium on Non-Photorealistic Animation and Rendering | Pub Date: 2011-08-05 | DOI: 10.1145/2024676.2024689
D. Sýkora, M. Ben-Chen, Martin Čadík, B. Whited, Maryann Simmons
We present a novel and practical texture mapping algorithm for hand-drawn cartoons that allows the production of visually rich animations with minimal user effort. Unlike previous techniques, our approach works entirely in the 2D domain and does not require the knowledge or creation of a 3D proxy model. Inspired by the fact that the human visual system tends to focus on the most salient features of a scene, which we observe for hand-drawn cartoons are the contours rather than the interior of regions, we can create the illusion of temporally coherent animation using only rough 2D image registration. This key observation allows us to design a simple yet effective algorithm that significantly reduces the amount of manual labor required to add visually complex detail to an animation, thus enabling efficient cartoon texturing for computer-assisted animation production pipelines. We demonstrate our technique on a variety of input animations as well as provide examples of postprocessing operations that can be applied to simulate 3D-like effects entirely in the 2D domain.
{"title":"TexToons: practical texture mapping for hand-drawn cartoon animations","authors":"D. Sýkora, M. Ben-Chen, Martin Čadík, B. Whited, Maryann Simmons","doi":"10.1145/2024676.2024689","DOIUrl":"https://doi.org/10.1145/2024676.2024689","url":null,"abstract":"We present a novel and practical texture mapping algorithm for hand-drawn cartoons that allows the production of visually rich animations with minimal user effort. Unlike previous techniques, our approach works entirely in the 2D domain and does not require the knowledge or creation of a 3D proxy model. Inspired by the fact that the human visual system tends to focus on the most salient features of a scene, which we observe for hand-drawn cartoons are the contours rather than the interior of regions, we can create the illusion of temporally coherent animation using only rough 2D image registration. This key observation allows us to design a simple yet effective algorithm that significantly reduces the amount of manual labor required to add visually complex detail to an animation, thus enabling efficient cartoon texturing for computer-assisted animation production pipelines. We demonstrate our technique on a variety of input animations as well as provide examples of postprocessing operations that can be applied to simulate 3D-like effects entirely in the 2D domain.","PeriodicalId":204343,"journal":{"name":"International Symposium on Non-Photorealistic Animation and Rendering","volume":"206 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128748943","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Artistic tessellations by growing curves
International Symposium on Non-Photorealistic Animation and Rendering | Pub Date: 2011-08-05 | DOI: 10.1145/2024676.2024697
Hua Li, D. Mould
In this paper we propose to tessellate a region by growing curves. We use a particle system, which flexibly provides good control over the final effects through variations in the initial placement, the placement order, curve direction, and curve properties. We also propose an automatic image-based mosaic method which has good texture indication, using a smoothed vector field to guide particle movement. The final irregular tessellation simulates stained glass, where the elongated curved tiles suggest the content of highly textured areas. We give some additional applications, some of which resemble naturally occurring irregular patterns such as cracks and scales. We also notice that stacking a set of curves in a structured way can produce the illusion of a 3D shape.
Title: Evaluation of emotional response to non-photorealistic images
International Symposium on Non-Photorealistic Animation and Rendering | Pub Date: 2011-08-05 | DOI: 10.1145/2024676.2024678
R. Mandryk, D. Mould, Hua Li
Non-photorealistic rendering (NPR) algorithms are used to produce stylized images, and have been evaluated on the aesthetic qualities of the resulting images. NPR-produced images have been used for aesthetic and practical reasons in media intended to produce an emotional reaction in a consumer (e.g., computer games, films, advertisements, and websites); however, it is not understood how the use of these algorithms affects the emotion portrayed in an image. We conducted a study of subjective emotional response to five common NPR approaches, two blurring techniques, and the original image with 42 participants, and found that the NPR algorithms dampened participants' emotional responses in terms of arousal (activation) and valence (pleasure).
Title: Hidden images
International Symposium on Non-Photorealistic Animation and Rendering | Pub Date: 2011-08-05 | DOI: 10.1145/2024676.2024681
Qiang Tong, Song-Hai Zhang, Shimin Hu, Ralph Robert Martin
A hidden image is a form of artistic expression in which one or more secondary objects (or scenes) are hidden within a primary image. Features of the primary image, especially its edges and texture, are used to portray a secondary object. People can recognize both the primary and secondary intent in such pictures, although the time taken to do so depends on the prior experience of the viewer and the strength of the clues. Here, we present a system for creating such images. It relies on the ability of human perception to recognize an object, e.g. a human face, from incomplete edge information within its interior, rather than its outline. Our system detects edges of the object to be hidden, and then finds a place where it can be embedded within the scene, together with a suitable transformation for doing so, by optimizing an energy based on edge differences. Embedding is performed using a modified Poisson blending approach, which strengthens matched edges of the host image using edges of the object being embedded. We show various hidden images generated by our system.
{"title":"Hidden images","authors":"Qiang Tong, Song-Hai Zhang, Shimin Hu, Ralph Robert Martin","doi":"10.1145/2024676.2024681","DOIUrl":"https://doi.org/10.1145/2024676.2024681","url":null,"abstract":"A hidden image is a form of artistic expression in which one or more secondary objects (or scenes) are hidden within a primary image. Features of the primary image, especially its edges and texture, are used to portray a secondary object. People can recognize both the primary and secondary intent in such pictures, although the time taken to do so depends on the prior experience of the viewer and the strength of the clues. Here, we present a system for creating such images. It relies on the ability of human perception to recognize an object, e.g. a human face, from incomplete edge information within its interior, rather than its outline. Our system detects edges of the object to be hidden, and then finds a place where it can be embedded within the scene, together with a suitable transformation for doing so, by optimizing an energy based on edge differences. Embedding is performed using a modified Poisson blending approach, which strengthens matched edges of the host image using edges of the object being embedded. We show various hidden images generated by our system.","PeriodicalId":204343,"journal":{"name":"International Symposium on Non-Photorealistic Animation and Rendering","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127942041","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Image simplification and vectorization
International Symposium on Non-Photorealistic Animation and Rendering | Pub Date: 2011-08-05 | DOI: 10.1145/2024676.2024687
S. Olsen, B. Gooch
We present an unsupervised system which takes digital photographs as input, and generates simplified, stylized vector data as output. The three component parts of our system are image-space stylization, edge tracing, and edge-based image reconstruction. The design of each of these components is specialized, relative to their state of the art equivalents, in order to improve their effectiveness when used in such a combined stylization/vectorization pipeline. We demonstrate that the vector data generated by our system is often both an effective visual simplification of the input photographs, and an effective simplification in the sense of memory efficiency, as judged relative to state of the art lossy image compression formats.
Title: Towards ground truth in geometric textures
International Symposium on Non-Photorealistic Animation and Rendering | Pub Date: 2011-08-05 | DOI: 10.1145/2024676.2024679
Zainab Almeraj, C. Kaplan, P. Asente, E. Lank
Two-dimensional geometric texture synthesis is the geometric analogue of raster-based texture synthesis. An absence of conventional evaluation procedures in recent synthesis attempts demands an inquiry into the visual significance of synthesized results. In this paper, we report on two psychophysical experiments that explore how people understand notions of similarity in geometric textures. We present perceptual metrics and human texture generation features that are crucial for future researchers when developing and assessing the success of their algorithms.