Chromatic shadows for improved perception
Veronika Soltészová, Daniel Patel, I. Viola

Soft shadows are effective depth and shape cues. However, traditional shadowing algorithms decrease the luminance in shadowed areas; features in shadow become dark, so shadowing hides information. For this reason, medical illustrators decrease the luminance in shadowed areas less and compensate for the reduced luminance range by adding color, i.e., by introducing a chromatic component. This paper presents a novel technique that enables an interactive setup of an illustrative shadow representation, preventing over-darkening of important structures. We introduce a scalar attribute for every voxel, denoted as shadowiness, and propose a shadow transfer function that maps shadowiness to a color and a blend factor. Typically, the blend factor increases linearly with the shadowiness. We then blend the original object color with the shadow color according to the blend factor. We suggest a specific shadow transfer function, designed together with a medical illustrator, which shifts the shadow color towards blue. This shadow transfer function is quantitatively evaluated with respect to relative depth and surface perception.
{"title":"Chromatic shadows for improved perception","authors":"Veronika Soltészová, Daniel Patel, I. Viola","doi":"10.1145/2024676.2024694","DOIUrl":"https://doi.org/10.1145/2024676.2024694","url":null,"abstract":"Soft shadows are effective depth and shape cues. However, traditional shadowing algorithms decrease the luminance in shadow areas. The features in shadow become dark and thus shadowing causes information hiding. For this reason, in shadowed areas, medical illustrators decrease the luminance less and compensate the lower luminance range by adding color, i.e., by introducing a chromatic component. This paper presents a novel technique which enables an interactive setup of an illustrative shadow representation for preventing overdarkening of important structures. We introduce a scalar attribute for every voxel denoted as shadowiness and propose a shadow transfer function that maps the shadowiness to a color and a blend factor. Typically, the blend factor increases linearly with the shadowiness. We then let the original object color blend with the shadow color according to the blend factor. We suggest a specific shadow transfer function, designed together with a medical illustrator which shifts the shadow color towards blue. This shadow transfer function is quantitatively evaluated with respect to relative depth and surface perception.","PeriodicalId":204343,"journal":{"name":"International Symposium on Non-Photorealistic Animation and Rendering","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127695306","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Customizing painterly rendering styles using stroke processes
Mingtian Zhao, Song-Chun Zhu

In this paper, we study the stroke placement problem in painterly rendering and present a solution named stroke processes, which enables intuitive and interactive customization of painting styles by mapping perceptual characteristics to rendering parameters. Using our method, a user can adjust styles (e.g., Fig. 1) easily by controlling these intuitive parameters. Our model and algorithm can reflect various styles in a single framework, combining point processes and stroke neighborhood graphs to model the spatial layout of brush strokes with stochastic reaction-diffusion processes that compute the levels and contrasts of stroke attributes to match desired statistics. We demonstrate the rendering quality and flexibility of this method with extensive experiments.
{"title":"Customizing painterly rendering styles using stroke processes","authors":"Mingtian Zhao, Song-Chun Zhu","doi":"10.1145/2024676.2024698","DOIUrl":"https://doi.org/10.1145/2024676.2024698","url":null,"abstract":"In this paper, we study the stroke placement problem in painterly rendering, and present a solution named stroke processes, which enables intuitive and interactive customization of painting styles by mapping perceptual characteristics to rendering parameters. Using our method, a user can adjust styles (e.g., Fig.1) easily by controlling these intuitive parameters. Our model and algorithm are capable of reflecting various styles in a single framework, which includes point processes and stroke neighborhood graphs to model the spatial layout of brush strokes, and stochastic reaction-diffusion processes to compute the levels and contrasts of their attributes to match desired statistics. We demonstrate the rendering quality and flexibility of this method with extensive experiments.","PeriodicalId":204343,"journal":{"name":"International Symposium on Non-Photorealistic Animation and Rendering","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115239370","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Painterly animation using video semantics and feature correspondence
Liang Lin, K. Zeng, Han Lv, Yizhou Wang, Ying-Qing Xu, Song-Chun Zhu
We present an interactive system that stylizes an input video into a painterly animation. The system consists of two phases. The first is a Video Parsing phase that extracts and labels semantic objects with different material properties (skin, hair, cloth, and so on) in the video, and then establishes robust correspondence between frames for discriminative image features inside each object. The second is a Painterly Rendering phase that performs the stylization based on the video semantics and feature correspondence. Compared to previous work, the proposed method advances painterly animation in three aspects. Firstly, we render artistic painterly styles using a rich set of example-based brush strokes. These strokes, placed in multiple layers and passes, are automatically selected according to the video semantics. Secondly, we warp brush strokes according to global object deformations, so that the strokes appear to be tightly attached to the object surfaces. Thirdly, we propose a series of novel techniques to reduce scintillation effects. Results of applying our system to several video clips show that it produces expressive oil painting animations.
{"title":"Painterly animation using video semantics and feature correspondence","authors":"Liang Lin, K. Zeng, Han Lv, Yizhou Wang, Ying-Qing Xu, Song-Chun Zhu","doi":"10.1145/1809939.1809948","DOIUrl":"https://doi.org/10.1145/1809939.1809948","url":null,"abstract":"We present an interactive system that stylizes an input video into a painterly animation. The system consists of two phases. The first is an Video Parsing phase that extracts and labels semantic objects with different material properties (skin, hair, cloth, and so on) in the video, and then establishes robust correspondence between frames for discriminative image features inside each object. The second Painterly Rendering phase performs the stylization based on the video semantics and feature correspondence. Compared to the previous work, the proposed method advances painterly animation in three aspects: Firstly, we render artistic painterly styles using a rich set of example-based brush strokes. These strokes, placed in multiple layers and passes, are automatically selected according to the video semantics. Secondly, we warp brush strokes according to global object deformations, so that the strokes appear to be tightly attached to the object surfaces. Thirdly, we propose a series of novel teniques to reduce the scintillation effects. Results applying our system to several video clips show that it produces expressive oil painting animations.","PeriodicalId":204343,"journal":{"name":"International Symposium on Non-Photorealistic Animation and Rendering","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115992433","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Directional texture transfer
Hochang Lee, Sanghyun Seo, Seung-Tack Ryoo, K. Yoon
A texture transfer algorithm modifies the target image by replacing its high-frequency information with that of the example source image. Previous texture transfer techniques normally use factors such as color distance and standard deviation for selecting the best texture from the candidate sets. These factors are useful for expressing a texture effect of the example source in the target image, but do not adequately account for the object shape of the target image. In this paper, we propose a novel texture transfer algorithm that expresses a directional effect based on the flow of the target image. For this, we use a directional factor that considers the gradient direction of the target image, adding an energy term that respects the image gradient to a previous fast texture transfer algorithm. Additionally, we propose a method for estimating the directional factor's weight from the target image. We have tested our algorithm with various target images. Our algorithm produces result images that exhibit the features of the example source texture while following the flow of the target image.
{"title":"Directional texture transfer","authors":"Hochang Lee, Sanghyun Seo, Seung-Tack Ryoo, K. Yoon","doi":"10.1145/1809939.1809945","DOIUrl":"https://doi.org/10.1145/1809939.1809945","url":null,"abstract":"A texture transfer algorithm modifies the target image replacing the high frequency information with the example source image. Previous texture transfer techniques normally use such factors as color distance and standard deviation for selecting the best texture from the candidate sets. These factors are useful for expressing a texture effect of the example source in the target image, but are less than optimal for considering the object shape of the target image.\u0000 In this paper, we propose a novel texture transfer algorithm to express the directional effect based on the flow of the target image. For this, we use a directional factor that considers the gradient direction of the target image. We add an additional energy term that respects the image gradient to the previous fast texture transfer algorithm. Additionally, we propose a method for estimating the directional factor weight value from the target image. We have tested our algorithm with various target images. Our algorithm can express a result image with the feature of the example source texture and the flow of the target image.","PeriodicalId":204343,"journal":{"name":"International Symposium on Non-Photorealistic Animation and Rendering","volume":"116 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125284910","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Sisley the abstract painter
Mingtian Zhao, Song-Chun Zhu

We present an interactive abstract painting system named Sisley. Sisley builds on the psychological principle [Berlyne 1971] that abstract art is often characterized by greater perceptual ambiguity than photographs, which tends to invoke a moderate mental effort of interpretation in the audience, accompanied by subtle aesthetic pleasure. Given an input photograph, Sisley decomposes it into a hierarchy/tree of its constituent image components (e.g., regions, objects of different categories) with interactive guidance from the user, then automatically generates corresponding abstract painting images, with the ambiguities of both the scene and individual objects increased to desired levels. Sisley consists of three major working parts: (1) an interactive image parser executing the tasks of segmentation, labeling, and hierarchical organization, (2) a painterly rendering engine with abstract operators for transferring the image appearance, and (3) a servomechanism-style module for numerical ambiguity computation and control. With the help of Sisley, even an amateur user can easily create abstract paintings from photographs in minutes. We have evaluated the rendering results of Sisley in human-subject experiments and verified that they achieve abstract effects similar to those of original abstract paintings by artists.
{"title":"Sisley the abstract painter","authors":"Mingtian Zhao, Song-Chun Zhu","doi":"10.1145/1809939.1809951","DOIUrl":"https://doi.org/10.1145/1809939.1809951","url":null,"abstract":"We present an interactive abstract painting system named Sisley. Sisley works upon the psychological principle [Berlyne 1971] that abstract arts are often characterized by their greater perceptual ambiguities than photographs, which tend to invoke moderate mental efforts of the audience for interpretation, accompanied with subtle aesthetic pleasures. Given an input photograph, Sisley decomposes it into a hierarchy/tree of its constituent image components (e.g., regions, objects of different categories) with interactive guidance from the user, then automatically generates corresponding abstract painting images, with increased ambiguities of both the scene and individual objects at desired levels. Sisley consists of three major working parts: (1) an interactive image parser executing the tasks of segmentation, labeling, and hierarchical organization, (2) a painterly rendering engine with abstract operators for transferring the image appearance, and (3) a numerical ambiguity computation and control module of servomechanism. With the help of Sisley, even an amateur user can create abstract paintings from photographs easily in minutes. We have evaluated the rendering results of Sisley using human experiments, and verified that they have similar abstract effects to original abstract paintings by artists.","PeriodicalId":204343,"journal":{"name":"International Symposium on Non-Photorealistic Animation and Rendering","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122150436","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Compact explosion diagrams
Markus Tatzgern, Denis Kalkofen, D. Schmalstieg

This paper presents a system to automatically generate compact explosion diagrams. Inspired by handmade illustrations, our approach reduces the complexity of an explosion diagram by rendering an exploded view for only a subset of the assemblies of an object. However, the exploded views are chosen so that they allow inference of the remaining unexploded assemblies of the entire 3D model. In particular, our approach demonstrates the assembly of a set of identical groups of parts by presenting an exploded view only for a single representative. In order to identify the representatives, our system automatically searches for recurring subassemblies. It selects representatives depending on a quality evaluation of their potential exploded views, taking into account visibility information of both the exploded view of a potential representative and the remaining unexploded assemblies. This allows rendering a balanced, compact explosion diagram consisting of a clear presentation of the exploded representatives as well as the remaining unexploded assemblies. Since representatives may interfere with one another, our system furthermore optimizes combinations of representatives. Throughout this paper we show a number of examples, which have all been rendered from unmodified 3D CAD models.
{"title":"Compact explosion diagrams","authors":"Markus Tatzgern, Denis Kalkofen, D. Schmalstieg","doi":"10.1145/1809939.1809942","DOIUrl":"https://doi.org/10.1145/1809939.1809942","url":null,"abstract":"This paper presents a system to automatically generate compact explosion diagrams. Inspired by handmade illustrations, our approach reduces the complexity of an explosion diagram by rendering an exploded view only for a subset of the assemblies of an object. However, the exploded views are chosen so that they allow inference of the remaining unexploded assemblies of the entire 3D model. In particular, our approach demonstrates the assembly of a set of identical groups of parts, by presenting an exploded view only for a single representative. In order to identify the representatives, our system automatically searches for recurring subassemblies. It selects representatives depending on a quality evaluation of their potential exploded view. Our system takes into account visibility information of both the exploded view of a potential representative as well as visibility information of the remaining unexploded assemblies. This allows rendering a balanced compact explosion diagram, consisting of a clear presentation of the exploded representatives as well as the unexploded remaining assemblies. Since representatives may interfere with one another, our system furthermore optimizes combinations of representatives. Throughout this paper we show a number of examples, which have all been rendered from unmodified 3D CAD models.","PeriodicalId":204343,"journal":{"name":"International Symposium on Non-Photorealistic Animation and Rendering","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121501066","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Self-similar texture for coherent line stylization
Pierre Bénard, Forrester Cole, Aleksey Golovinskiy, Adam Finkelstein
Stylized line rendering for animation has traditionally traded off between two undesirable artifacts: stroke texture sliding and stroke texture stretching. This paper proposes a new stroke texture representation, the self-similar line artmap (SLAM), which avoids both artifacts. SLAM textures provide continuous, infinite zoom while maintaining approximately constant appearance in screen space, and can be produced automatically from a single exemplar. SLAMs can be used as drop-in replacements for conventional stroke textures in 2D illustration and animation. Furthermore, SLAMs enable a new, simple approach to temporally coherent rendering of 3D paths that is suitable for interactive applications. We demonstrate results for 2D and 3D animations.
{"title":"Self-similar texture for coherent line stylization","authors":"Pierre Bénard, Forrester Cole, Aleksey Golovinskiy, Adam Finkelstein","doi":"10.1145/1809939.1809950","DOIUrl":"https://doi.org/10.1145/1809939.1809950","url":null,"abstract":"Stylized line rendering for animation has traditionally traded-off between two undesirable artifacts: stroke texture sliding and stroke texture stretching. This paper proposes a new stroke texture representation, the self-similar line artmap (SLAM), which avoids both these artifacts. SLAM textures provide continuous, infinite zoom while maintaining approximately constant appearance in screen-space, and can be produced automatically from a single exemplar. SLAMs can be used as drop-in replacements for conventional stroke textures in 2D illustration and animation. Furthermore, SLAMs enable a new, simple approach to temporally coherent rendering of 3D paths that is suitable for interactive applications. We demonstrate results for 2D and 3D animations.","PeriodicalId":204343,"journal":{"name":"International Symposium on Non-Photorealistic Animation and Rendering","volume":"87 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130433198","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Stylized depiction of images based on depth perception
Jorge López-Moreno, Jorge Jimenez, Sunil Hadap, E. Reinhard, K. Anjyo, D. Gutierrez
Recent work in image editing is opening up new possibilities to manipulate and enhance input images. Within this context, we leverage well-known characteristics of human perception along with a simple depth approximation algorithm to creatively relight images, generating non-photorealistic renditions that would be difficult to achieve with existing methods. Our real-time implementation on graphics hardware allows the user to efficiently explore artistic possibilities for each image. We show results produced with four different styles, demonstrating the versatility of our approach, and validate our assumptions and simplifications by means of a user study.
{"title":"Stylized depiction of images based on depth perception","authors":"Jorge López-Moreno, Jorge Jimenez, Sunil Hadap, E. Reinhard, K. Anjyo, D. Gutierrez","doi":"10.1145/1809939.1809952","DOIUrl":"https://doi.org/10.1145/1809939.1809952","url":null,"abstract":"Recent works in image editing are opening up new possibilities to manipulate and enhance input images. Within this context, we leverage well-known characteristics of human perception along with a simple depth approximation algorithm to creatively relight images for the purpose of generating non-photorealistic renditions that would be difficult to achieve with existing methods. Our realtime implementation on graphics hardware allows the user to efficiently explore artistic possibilities for each image. We show results produced with four different styles proving the versatility of our approach, and validate our assumptions and simplifications by means of a user study.","PeriodicalId":204343,"journal":{"name":"International Symposium on Non-Photorealistic Animation and Rendering","volume":"19 5","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113932269","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Example-based stippling using a scale-dependent grayscale process
Domingo Martín, G. Arroyo, M. V. Luzón, Tobias Isenberg
We present an example-based approach to synthesizing stipple illustrations for static 2D images that produces scale-dependent results appropriate for an intended spatial output size and resolution. We show how treating stippling as a grayscale process allows us both to produce on-screen output and to achieve stipple merging at medium tonal ranges. At the same time, we can also produce images with high spatial and low color resolution for print reproduction. In addition, we discuss how to incorporate high-level illustration considerations into the stippling process, based on discussions with and observations of a stipple artist. The implementation of the technique is based on a fast method for distributing dots using halftoning and can be used to create stipple images interactively.
{"title":"Example-based stippling using a scale-dependent grayscale process","authors":"Domingo Martín, G. Arroyo, M. V. Luzón, Tobias Isenberg","doi":"10.1145/1809939.1809946","DOIUrl":"https://doi.org/10.1145/1809939.1809946","url":null,"abstract":"We present an example-based approach to synthesizing stipple illustrations for static 2D images that produces scale-dependent results appropriate for an intended spatial output size and resolution. We show how treating stippling as a grayscale process allows us to both produce on-screen output and to achieve stipple merging at medium tonal ranges. At the same time we can also produce images with high spatial and low color resolution for print reproduction. In addition, we discuss how to incorporate high-level illustration considerations into the stippling process based on discussions with and observations of a stipple artist. The implementation of the technique is based on a fast method for distributing dots using halftoning and can be used to create stipple images interactively.","PeriodicalId":204343,"journal":{"name":"International Symposium on Non-Photorealistic Animation and Rendering","volume":"98 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124083176","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Video stylization for digital ambient displays of home movies
T. Wang, J. Collomosse, David Slatter, P. Cheatle, D. Greig
Falling hardware costs have prompted an explosion in casual video capture by domestic users. Yet, this video is infrequently accessed post-capture and often lies dormant on users' PCs. We present a system to breathe life into home video repositories, drawing upon artistic stylization to create a "Digital Ambient Display" that automatically selects, stylizes and transitions between videos in a semantically meaningful sequence. We present a novel algorithm based on multi-label graph cut for segmenting video into temporally coherent region maps. These maps are used to both stylize video into cartoons and paintings, and measure visual similarity between frames for smooth sequence transitions. We demonstrate coherent segmentation and stylization over a variety of home videos.
{"title":"Video stylization for digital ambient displays of home movies","authors":"T. Wang, J. Collomosse, David Slatter, P. Cheatle, D. Greig","doi":"10.1145/1809939.1809955","DOIUrl":"https://doi.org/10.1145/1809939.1809955","url":null,"abstract":"Falling hardware costs have prompted an explosion in casual video capture by domestic users. Yet, this video is infrequently accessed post-capture and often lies dormant on users' PCs. We present a system to breathe life into home video repositories, drawing upon artistic stylization to create a \"Digital Ambient Display\" that automatically selects, stylizes and transitions between videos in a semantically meaningful sequence. We present a novel algorithm based on multi-label graph cut for segmenting video into temporally coherent region maps. These maps are used to both stylize video into cartoons and paintings, and measure visual similarity between frames for smooth sequence transitions. We demonstrate coherent segmentation and stylization over a variety of home videos.","PeriodicalId":204343,"journal":{"name":"International Symposium on Non-Photorealistic Animation and Rendering","volume":"82 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121053621","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}