Example-based brushes for coherent stylized renderings
Ming Zheng, Antoine Milliez, M. Gross, R. Sumner
International Symposium on Non-Photorealistic Animation and Rendering, 2017-07-29. DOI: https://doi.org/10.1145/3092919.3092929

Painterly stylization is the cornerstone of non-photorealistic rendering. Inspired by the versatility of paint as a physical medium, existing methods target intuitive interfaces that mimic physical brushes, giving artists the ability to place paint strokes directly in a digital scene. Other work focuses on physically simulating the interaction between paint and paper, or on realistically rendering wet and dry paint. In our work, we leverage the versatility of example-based methods that can generate paint strokes of arbitrary shape and style based on a collection of images acquired from physical media. Such ideas have gained popularity because they do not require cumbersome physical simulation and achieve high fidelity without the need for a specific model or rule set. However, existing methods are limited to the generation of static 2D paintings and cannot be applied in the context of 3D painting and animation, where paint strokes change shape and length as the camera viewport moves. Our method targets this shortcoming by generating temporally coherent example-based paint strokes that accommodate such length and shape changes. We demonstrate the robustness of our method with a 2D painting application that provides immediate feedback to the user, and we show how our brush model can be applied to the screen-space rendering of 3D paintings on a variety of examples.
Texture-aware ASCII art synthesis with proportional fonts
Xuemiao Xu, Linyuan Zhong, M. Xie, Jing Qin, Yilan Chen, Qiang Jin, T. Wong, Guoqiang Han
International Symposium on Non-Photorealistic Animation and Rendering, 2015-06-20. DOI: https://doi.org/10.2312/EXP.20151191

We present a fast structure-based ASCII art generation method that accepts arbitrary images (real photographs or hand drawings) as input. Our method supports not only fixed-width fonts but also the visually more pleasing and computationally more challenging proportional fonts, which allows us to represent challenging images with a variety of structures using characters. We take human perception into account and develop a novel feature extraction scheme based on a multi-orientation phase congruency model. Unlike most existing contour detection methods, our scheme does not attempt to remove textures as much as possible. Instead, it aims at faithfully capturing visually sensitive features, including both main contours and textural structures, while suppressing visually insensitive features such as minor texture elements and noise. Together with a deformation-tolerant image similarity metric, we can generate lively and meaningful ASCII art even when the choices of character shapes and placement are very limited. A dynamic-programming-based optimization simultaneously determines the optimal proportional-font characters for matching and their optimal placement. Experimental results show that our method outperforms state-of-the-art methods in terms of visual quality.
Semi-automatic digital epigraphy from images with normals
Sema Berkiten, Xinyi Fan, S. Rusinkiewicz
International Symposium on Non-Photorealistic Animation and Rendering, 2015-06-20. DOI: https://doi.org/10.2312/EXP.20151182

We present a semi-automated system for converting photometric datasets (RGB images with normals) into geometry-aware non-photorealistic illustrations that obey the common conventions of epigraphy (black-and-white archaeological drawings of inscriptions). We focus on rock inscriptions formed by carving into or pecking out the rock surface: these are characteristically rough with shallow relief, making the problem very challenging for previous line drawing methods. Our system allows the user to easily outline the inscriptions on the rock surface, then segment out the inscriptions and create line drawings and shaded renderings in a variety of styles. We explore both constant-width and tilt-indicating lines, as well as locally shape-revealing shading. Our system produces more understandable illustrations than previous NPR techniques, successfully converting epigraphy from a manual and painstaking process into a user-guided semi-automatic process.
The Markov pen: online synthesis of free-hand drawing styles
Katrin Lang, M. Alexa
International Symposium on Non-Photorealistic Animation and Rendering, 2015-06-20. DOI: https://doi.org/10.2312/EXP.20151193

Learning expressive curve styles from example is crucial for interactive or computer-based narrative illustrations. We propose a method for online synthesis of free-hand drawing styles along arbitrary base paths by means of an autoregressive Markov model. The choice of further curve progression is made while drawing, by sampling from a series of previously learned feature distributions conditioned on local curvature. The algorithm requires no user-adjustable parameters other than one short example style. It may be used as a custom "random brush" designer in any task that requires rapid placement of a large number of detail-rich shapes that are tedious to create manually.
Hybrid-space localized stylization method for view-dependent lines extracted from 3D models
L. Cardona, S. Saito
International Symposium on Non-Photorealistic Animation and Rendering, 2015-06-20. DOI: https://doi.org/10.2312/EXP.20151181

We propose a localized stylization method that combines object-space and image-space techniques to locally stylize view-dependent lines extracted from 3D models. In the input phase, the user can customize a style and draw strokes by tracing over view-dependent feature lines such as occluding contours and suggestive contours. For each stroke drawn, the system stores its style properties, as well as its surface location on the underlying polygonal mesh, in a data structure referred to as a registered stroke. In the rendering phase, a new attraction field leads active contours generated from the registered strokes to match the current frame's feature lines and to maintain the style and path coordinates of strokes across nearby viewpoints. For each registered stroke, a limited surface region referred to as its influence area is used to improve line-matching accuracy and discard obvious mismatches. The proposed stylization system produces uncluttered line drawings that convey additional information, such as material properties or feature sharpness, and is evaluated by measuring its usability and performance.
Hierarchical motion brushes for animation instancing
Antoine Milliez, Gioacchino Noris, Ilya Baran, Stelian Coros, Marie-Paule Cani, Maurizio Nitti, A. Marra, M. Gross, R. Sumner
International Symposium on Non-Photorealistic Animation and Rendering, 2014-08-08. DOI: https://doi.org/10.1145/2630397.2630402

Our work on "motion brushes" provides a new workflow for the creation and reuse of 3D animation with a focus on stylized movement and depiction. Conceptually, motion brushes expand existing brush models by incorporating hierarchies of 3D animated content including geometry, appearance information, and motion data as core brush primitives that are instantiated using a painting interface. Because motion brushes can encompass all the richness of detail and movement offered by animation software, they accommodate complex, varied effects that are not easily created by other means. To support reuse and provide an effective means for managing complexity, we propose a hierarchical representation that allows simple brushes to be combined into more complex ones. Our system provides stroke-based control over motion-brush parameters, including tools to effectively manage the temporal nature of the motion brush instances. We demonstrate the flexibility and richness of our system with motion brushes for splashing rain, footsteps appearing in the snow, and stylized visual effects.
Modular line-based halftoning via recursive division
Abdalla G. M. Ahmed
International Symposium on Non-Photorealistic Animation and Rendering, 2014-08-08. DOI: https://doi.org/10.1145/2630397.2630403

We present a new approach to stippling that recursively divides a grayscale image into rectangles containing equal amounts of ink; we then use the resulting structure to generate novel line-based halftoning techniques. We present four different rendering styles that share the same underlying structure, two of which bear some similarity to Bosch and Kaplan's TSP Art and to Inoue and Urahama's MST Halftoning. The technique is fast enough for real-time interaction, and at least one of the four rendering styles is well suited for maze construction.
Painting with triangles
M. D. Benjamin, S. DiVerdi, Adam Finkelstein
International Symposium on Non-Photorealistic Animation and Rendering, 2014-08-08. DOI: https://doi.org/10.1145/2630397.2630399

Although vector graphics offer a number of benefits, conventional vector painting programs provide only limited support for the traditional painting metaphor. We propose a new algorithm that translates a user's mouse motion into a triangle mesh representation. This triangle mesh can then be composited onto a canvas containing an existing mesh representation of earlier strokes. This representation allows the algorithm to render solid colors and linear gradients. It also enables painting at any resolution. This paradigm allows artists to create complex, multi-scale drawings with gradients and sharp features while avoiding pixel sampling artifacts.
ChromoStereoscopic rendering for trichromatic displays
Leïla Schemali, E. Eisemann
International Symposium on Non-Photorealistic Animation and Rendering, 2014-08-08. DOI: https://doi.org/10.1145/2630397.2630398

The chromostereopsis phenomenon leads to a differing depth perception of different color hues; e.g., red is perceived slightly in front of blue. In chromostereoscopic rendering, 2D images are produced that encode depth in color. While the natural chromostereopsis of the human visual system is rather weak, it can be enhanced with ChromaDepth® glasses, which induce chromatic aberrations in one eye by refracting light of different wavelengths differently, thereby offsetting the projected position slightly in that eye. Although it might seem natural to map depth linearly to hue, which was also the basis of previous solutions, we demonstrate that such a mapping reduces the stereoscopic effect on standard trichromatic displays and printing systems. We propose an algorithm that enables an improved stereoscopic experience with reduced artifacts.
Creating personalized jigsaw puzzles
Cheryl Lau, Yuliy Schwartzburg, Appu Shaji, Zahra Sadeghipoor, S. Süsstrunk
International Symposium on Non-Photorealistic Animation and Rendering, 2014-08-08. DOI: https://doi.org/10.1145/2630397.2630405

Designing aesthetically pleasing and challenging jigsaw puzzles is considered an art that requires considerable skill and expertise. We propose a tool that allows novice users to create customized jigsaw puzzles based on the image content and a user-defined curve. A popular design choice among puzzle makers, called color line cutting, is to cut the puzzle along the main contours in an image, making the puzzle both aesthetically interesting and challenging to solve. At the same time, the puzzle maker has to make sure that puzzle pieces interlock so that they do not disassemble easily. Our method automatically optimizes for puzzle cuts that follow the main contours in the image and match the user-defined curve. We handle the tradeoff between color line cutting and interlocking, and we introduce a linear formulation for the interlocking constraint. We propose a novel method for eliminating self-intersections and ensuring a minimum width in our output curves. Our method satisfies these necessary fabrication constraints in order to make valid puzzles that can be easily realized with present-day laser cutters.