Non-Photorealistic Rendering covers a wide range of visual effects, and much work has been dedicated to creating digital representations of traditional media --- either through artist controls or programmatically. We explore a variation of this work, which aims to create digital media that takes the metaphors of traditional media but applies them in ways that have no physical equivalent --- thus expanding notions of what digital media can represent. Smoke Brush is a system for applying smoke-like brush strokes to a digital canvas. Using Smoke Brush, artists can add animated, constrained smoke effects to existing pictures or create images represented entirely by smoke. Our drawing system produces artifacts realized as animated GIFs --- a commonly available digital format used in cinemagraphs. We also describe a technique that produces smooth, continuous motion in these looped animations while remaining faithful to the original artist input.
Title: Smoke Brush. Authors: Sarah Abraham, D. Fussell. DOI: 10.1145/2630397.2630404. Pub Date: 2014-08-08. Venue: International Symposium on Non-Photorealistic Animation and Rendering.
I argue in favor of a systematic subjective evaluation of non-photorealistic images. Objective measurements are hard to design, and quantitative user studies are problematic for a multiplicity of reasons. Subjective evaluations are not quantitative but are faster to conduct and offer the chance to dig into subtleties that are obscured by numerical scores. By carefully laying out the important elements of the intended image style, and then evaluating their results according to their adherence to the style, researchers can produce convincing evaluations with a manageable level of effort.
Title: Authorial subjective evaluation of non-photorealistic images. Authors: D. Mould. DOI: 10.1145/2630397.2630400. Pub Date: 2014-08-08. Venue: International Symposium on Non-Photorealistic Animation and Rendering.
Mark Browning, Connelly Barnes, Samantha Ritter, Adam Finkelstein
We present a method that combines hand-drawn artwork with fluid simulations to produce animated fluids in the visual style of the artwork. Given a fluid simulation and a set of keyframes rendered by the artist in any medium, our system produces a set of in-betweens that visually matches the style of the keyframes and roughly follows the motion from the underlying simulation. Our method leverages recent advances in patch-based regenerative morphing and image melding to produce temporally coherent sequences with visual fidelity to the target medium. Because direct application of these methods results in motion that is generally not fluid-like, we adapt them to produce motion closely matching that of the underlying simulation. The resulting animation is visually and temporally coherent, stylistically consistent with the given keyframes, and approximately matches the motion from the simulation. We demonstrate the method with animations in a variety of visual styles.
Title: Stylized keyframe animation of fluid simulations. DOI: 10.1145/2630397.2630406. Pub Date: 2014-08-08. Venue: International Symposium on Non-Photorealistic Animation and Rendering.
Jingwan Lu, S. DiVerdi, Willa Chen, Connelly Barnes, Adam Finkelstein
The color of composited pigments in digital painting is generally computed in one of two ways: either alpha blending in RGB, or the Kubelka-Munk equation (KM). The former fails to reproduce paint-like appearances, while the latter is difficult to use. We present a data-driven pigment model that reproduces arbitrary compositing behavior by interpolating sparse samples in a high-dimensional space. The input is an image of a color chart, which provides the composition samples. We propose two prediction algorithms: one performs simple interpolation using radial basis functions (RBF), while the other trains a parametric model based on the KM equation to compute novel values. We show that RBF is able to reproduce arbitrary compositing behaviors, even non-paint-like ones such as additive blending, while KM compositing is more robust to acquisition noise and can generalize over a broader range of values.
Title: RealPigment: paint compositing by example. DOI: 10.1145/2630397.2630401. Pub Date: 2014-08-08. Venue: International Symposium on Non-Photorealistic Animation and Rendering.
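The RBF variant described in the abstract can be illustrated with a generic Gaussian radial-basis interpolator over scattered compositing samples. This is a minimal sketch under assumed conventions (Gaussian kernel, a fixed width `eps`, a small ridge term for noisy data), not the paper's implementation:

```python
import numpy as np

def rbf_fit(samples, values, eps=0.5):
    """Fit Gaussian-RBF weights to scattered compositing samples.

    samples: (N, d) array of input color coordinates in [0, 1]
    values:  (N, k) array of observed composite colors
    """
    d2 = np.sum((samples[:, None, :] - samples[None, :, :]) ** 2, axis=-1)
    K = np.exp(-d2 / (2.0 * eps ** 2))
    # A small ridge term keeps the solve stable under acquisition noise.
    weights = np.linalg.solve(K + 1e-8 * np.eye(len(samples)), values)
    return weights

def rbf_predict(samples, weights, queries, eps=0.5):
    """Evaluate the fitted interpolant at new query colors."""
    d2 = np.sum((queries[:, None, :] - samples[None, :, :]) ** 2, axis=-1)
    K = np.exp(-d2 / (2.0 * eps ** 2))
    return K @ weights
```

Because the kernel matrix is (near-)interpolating, predictions at the original sample points reproduce the observed composites almost exactly, which is what lets this scheme mimic arbitrary compositing behavior.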
In this paper we present ElastiFace, a simple and versatile method for establishing correspondence between textured face models, either for the construction of a blend-shape facial rig or for the exploration of new characters by morphing between a set of input models. While there exists a wide variety of approaches for inter-surface mapping and mesh morphing, most techniques are not suitable for our application: They either require the insertion of additional vertices, are limited to topological planes or spheres, are restricted to near-isometric input meshes, and/or are algorithmically and computationally involved. In contrast, our method extends linear non-rigid registration techniques to allow for strongly varying input geometries. It is geometrically intuitive, simple to implement, computationally efficient, and robustly handles highly non-isometric input models. In order to match the requirements of other applications, such as recent perception studies, we further extend our geometric matching to the matching of input textures and morphing of geometries and rendering styles.
Title: ElastiFace: matching and blending textured faces. Authors: E. Zell, M. Botsch. DOI: 10.1145/2486042.2486045. Pub Date: 2013-07-19. Venue: International Symposium on Non-Photorealistic Animation and Rendering.
Pixel artists rasterize vector shapes by hand to minimize artifacts at low resolutions and to emphasize the aesthetics of visible pixels. We describe Superpixelator, an algorithm that automates this process by rasterizing vector line art in a low-resolution pixel art style. Our technique successfully eliminates most rasterization artifacts and draws smoother curves. To draw shapes more effectively, we use optimization techniques to preserve shape properties such as symmetry, aspect ratio, and sharp angles. Our algorithm also supports "manual antialiasing," the style of antialiasing used in pixel art. Professional pixel artists report that Superpixelator's results are as good as, or better than, drawings hand-rasterized by artists.
Title: Rasterizing and antialiasing vector line art in the pixel art style. Authors: Tiffany Inglis, Daniel Vogel, C. Kaplan. DOI: 10.1145/2486042.2486044. Pub Date: 2013-07-19. Venue: International Symposium on Non-Photorealistic Animation and Rendering.
In recent years, an increasing number of example-based Geometric Texture Synthesis (GTS) algorithms have been proposed. However, there have been few attempts to evaluate these algorithms rigorously. We are driven by this lack of validation and the simplicity of the GTS problem to look closer at perceptual similarity between geometric arrangements. Using samples from a geological database, our research first establishes a dataset of geometric arrangements gathered from multiple synthesis sources. We then employ the dataset in two evaluation studies. Collectively these empirical methods provide formal foundations for perceptual studies in GTS, insight into the robustness of GTS algorithms and a better understanding of similarity in the context of geometric texture arrangements.
Title: Towards effective evaluation of geometric texture synthesis algorithms. Authors: Zainab Almeraj, C. Kaplan, P. Asente. DOI: 10.1145/2486042.2486043. Pub Date: 2013-07-19. Venue: International Symposium on Non-Photorealistic Animation and Rendering.
Pub Date: 2012-06-04. DOI: 10.2312/PE/NPAR/NPAR12/047-056
L. Northam, P. Asente, C. Kaplan
We present a method for stylizing stereoscopic 3D images that guarantees consistency between the left and right views. Our method decomposes the left and right views of an input image into discretized disparity layers and merges the corresponding layers from the left and right views into a single layer where stylization takes place. We then construct new stylized left and right views by compositing portions of the stylized layers. Because the left and right views come from the same source layers, our method eliminates common artifacts that cause viewer discomfort. We also present a stereoscopic 3D painterly rendering algorithm tailored to our layer-based approach. This method uses disparity information to assist in stroke creation so that strokes follow surface geometry without ignoring painted surface patterns. Finally, we conduct a user study that demonstrates that our approach to stereoscopic 3D image stylization leads to images that are more comfortable to view than those created using other techniques.
Title: Consistent stylization and painterly rendering of stereoscopic 3D images. Venue: International Symposium on Non-Photorealistic Animation and Rendering.
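The first step of the method above, decomposing a view into discretized disparity layers, can be sketched by quantizing the disparity map into bins. This is a hypothetical illustration of that single step only (the merging of left/right layers, stylization, and recompositing are omitted), with uniform bin edges as an assumed discretization:

```python
import numpy as np

def disparity_layers(image, disparity, n_layers=8):
    """Split an image into discrete layers by quantizing its disparity map.

    image:     (H, W, C) array
    disparity: (H, W) per-pixel disparity
    Returns a list of (mask, layer_image) pairs, ordered by disparity bin.
    """
    edges = np.linspace(disparity.min(), disparity.max(), n_layers + 1)
    # digitize assigns bin indices; clip so the max-disparity pixels
    # fall into the last layer rather than an out-of-range bin.
    bins = np.clip(np.digitize(disparity, edges), 1, n_layers)
    layers = []
    for i in range(1, n_layers + 1):
        mask = bins == i
        # Zero out pixels outside this layer's disparity range.
        layer = np.where(mask[..., None], image, 0.0)
        layers.append((mask, layer))
    return layers
```

The masks partition the image, so summing the layers reconstructs the input exactly; each layer can then be stylized independently before compositing.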
Pub Date: 2012-06-04. DOI: 10.2312/PE/NPAR/NPAR12/011-019
Peeraya Sripian, Yasushi Yamaguchi
A hybrid image is an image whose interpretation changes with viewing distance. By extracting the low spatial frequency band from one source image and the high spatial frequency band from another, the combined image is interpreted differently depending on viewing distance. This research finds a way to construct hybrid images regardless of the source images' shapes. Without the need to carefully pick the two images to be superimposed, hybrid images can be extended to any kind of image content. There are two approaches to accomplishing shape-free hybrid images. The noise-inserted approach forces observers to perceive the alternative low-frequency image as meaningless noise at a close viewing distance by manipulating contrast and details in the high-frequency image. A color-inserted approach, which attracts visual attention to aid perception of the high-frequency image, is also introduced. Finally, a hybrid image recognition experiment shows that our proposed method yields a better recognition rate than the original method while preserving hybrid image characteristics.
Title: Shape-free hybrid image. Venue: International Symposium on Non-Photorealistic Animation and Rendering.
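The low/high frequency split underlying hybrid images can be sketched as follows. This follows the classic construction the paper builds on (low-pass of one image plus high-pass of another), not the paper's noise- or color-inserted extensions; `sigma` is an assumed free parameter, not a value from the paper:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hybrid_image(img_far, img_near, sigma=6.0):
    """Combine the low frequencies of one image with the high
    frequencies of another, so the result reads as img_far from a
    distance and img_near up close.

    img_far:  grayscale array serving as the low-pass source
    img_near: grayscale array serving as the high-pass source
    sigma:    cutoff of the Gaussian blur (an assumed parameter)
    """
    low = gaussian_filter(img_far, sigma)
    high = img_near - gaussian_filter(img_near, sigma)
    return low + high
```

At a distance the eye's limited acuity discards the high-pass residual, leaving only the blurred `img_far`; up close the high-frequency detail of `img_near` dominates perception.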
Creating pixel art is a laborious process that requires artists to place individual pixels by hand. Although many image editors provide vector-to-raster conversions, the results produced do not meet the standards of pixel art: artifacts such as jaggies or broken lines frequently occur. We describe a novel Pixelation algorithm that rasterizes vector line art while adhering to established conventions used by pixel artists. We compare our results through a user study to those generated by Adobe Illustrator and Photoshop, as well as hand-drawn samples by both amateur and professional pixel artists.
Title: Pixelating vector line art. Authors: Tiffany Inglis, C. Kaplan. DOI: 10.1145/2342896.2343021. Pub Date: 2012-06-04. Venue: International Symposium on Non-Photorealistic Animation and Rendering.
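For contrast with the pixel-art conventions the paper formalizes, the textbook Bresenham algorithm below is the kind of naive vector-to-raster conversion whose artifacts (uneven pixel runs, jaggies on near-horizontal segments) pixel artists avoid by hand. This is the standard algorithm, not the paper's Pixelation method:

```python
def bresenham(x0, y0, x1, y1):
    """Standard integer Bresenham line rasterization.

    Returns the list of (x, y) pixels from (x0, y0) to (x1, y1)
    inclusive. Serves as the naive baseline that pixel-art-aware
    rasterizers improve upon.
    """
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    pixels = []
    while True:
        pixels.append((x0, y0))
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:  # step in x
            err += dy
            x0 += sx
        if e2 <= dx:  # step in y
            err += dx
            y0 += sy
    return pixels
```

On a shallow slope such as (0, 0) to (4, 2), Bresenham emits pixel runs of unequal length, which is exactly the kind of irregularity the user study above asks artists and algorithms to eliminate.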