
International Symposium on Non-Photorealistic Animation and Rendering: Latest Publications

Quantifying visual abstraction quality for stipple drawings
Pub Date : 2017-07-29 DOI: 10.1145/3092919.3092923
Marc Spicker, F. Hahn, Thomas Lindemeier, D. Saupe, O. Deussen
We investigate how the perceived abstraction quality of stipple illustrations is related to the number of points used to create them. Since it is difficult to find objective functions that quantify the visual quality of such illustrations, we gather comparative data by a crowdsourcing user study and employ a paired comparison model to deduce absolute quality values. Based on this study we show that it is possible to predict the perceived quality of stippled representations based on the properties of an input image. Our results are related to Weber-Fechner's law from psychophysics and indicate a logarithmic relation between numbers of points and perceived abstraction quality. We give guidance for the number of stipple points that is typically enough to represent an input image well.
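To make the stated logarithmic relation concrete, the sketch below fits a model of the form q(n) = a*ln(n) + b and inverts it to estimate how many points are needed to reach a target quality. The point counts, quality scores, and fitted parameters are invented for illustration; the study derives its absolute scores from a paired-comparison model, which is not reproduced here.

```python
# Minimal sketch of a logarithmic quality model q(n) = a*ln(n) + b, in the
# spirit of the Weber-Fechner-style relation described in the abstract.
# The sample data below are hypothetical, not the study's measurements.
import numpy as np

points  = np.array([1000, 2000, 5000, 10000, 20000, 50000])  # stipple counts
quality = np.array([0.20, 0.35, 0.55, 0.70, 0.82, 0.95])     # hypothetical scores

a, b = np.polyfit(np.log(points), quality, 1)  # least-squares fit in the log domain

def predicted_quality(n):
    """Predicted perceived abstraction quality for n stipple points."""
    return a * np.log(n) + b

def points_for_quality(q):
    """Invert the fit: point count predicted to reach quality level q."""
    return float(np.exp((q - b) / a))

print(round(points_for_quality(0.9)))  # rough point budget for a high-quality result
```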
Citations: 14
Depth-aware neural style transfer
Pub Date : 2017-07-29 DOI: 10.1145/3092919.3092924
Xiao-Chang Liu, Ming-Ming Cheng, Yu-Kun Lai, Paul L. Rosin
Neural style transfer has recently received significant attention and demonstrated amazing results. An efficient solution proposed by Johnson et al. trains feed-forward convolutional neural networks by defining and optimizing perceptual loss functions. Such methods are typically based on high-level features extracted from pre-trained neural networks, where the loss functions contain two components: style loss and content loss. However, such pre-trained networks are originally designed for object recognition, and hence the high-level features often focus on the primary target and neglect other details. As a result, when input images contain multiple objects potentially at different depths, the resulting images are often unsatisfactory because image layout is destroyed and the boundary between the foreground and background as well as different objects becomes obscured. We observe that the depth map effectively reflects the spatial distribution in an image and preserving the depth map of the content image after stylization helps produce an image that preserves its semantic content. In this paper, we introduce a novel approach for neural style transfer that integrates depth preservation as additional loss, preserving overall image layout while performing style transfer.
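A minimal PyTorch-style sketch of the loss structure described above (content, style, and an added depth-preservation term) might look as follows. Here vgg_features, gram, and depth_net are hypothetical stand-ins for a pre-trained feature extractor, a Gram-matrix helper, and a single-image depth estimator, and the layer choice and weights are assumptions rather than the authors' settings.

```python
import torch.nn.functional as F

def gram(feat):
    """Normalised Gram matrix of a feature map: (B, C, H, W) -> (B, C, C)."""
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def total_loss(stylized, content, style_grams, vgg_features, depth_net,
               w_content=1.0, w_style=1e3, w_depth=10.0):
    feats_s = vgg_features(stylized)   # feature maps of the stylized output
    feats_c = vgg_features(content)    # feature maps of the content image
    # Content loss on one mid-level layer (index chosen arbitrarily for the sketch).
    l_content = F.mse_loss(feats_s[2], feats_c[2])
    # Style loss: match Gram statistics of the style image on every layer.
    l_style = sum(F.mse_loss(gram(fs), g) for fs, g in zip(feats_s, style_grams))
    # Depth loss: keep the estimated depth of the output close to that of the
    # content image, which preserves the overall spatial layout.
    l_depth = F.mse_loss(depth_net(stylized), depth_net(content))
    return w_content * l_content + w_style * l_style + w_depth * l_depth
```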
Citations: 64
Whole-cloth quilting patterns from photographs
Pub Date : 2017-07-29 DOI: 10.1145/3092919.3092925
Chenxi Liu, J. Hodgins, J. McCann
Whole-cloth quilts are decorative and functional artifacts made of plain cloth embellished with complicated stitching patterns. We describe a method that can automatically create a sewing pattern for a whole-cloth quilt from a photograph. Our technique begins with a segmented image, extracts desired and optional edges, and creates a continuous sewing path by approximately solving the Rural Postman Problem (RPP). In addition to many example quilts, we provide visual and numerical comparisons to previous single-line illustration approaches.
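The continuous-path idea can be illustrated with a toy greedy ordering of required stitch edges. This is only a rough stand-in for the approximate Rural Postman solver the abstract refers to; the edge data and the nearest-endpoint heuristic are invented for the sketch.

```python
# Toy greedy ordering of required stitch edges into one continuous sewing path.
import math

def greedy_sewing_path(edges):
    """Order (start, end) point pairs so the needle travels a short (not optimal) distance."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    remaining = list(edges)
    path = [remaining.pop(0)]          # start with the first required edge
    while remaining:
        tip = path[-1][1]              # current needle position
        # pick the remaining edge whose nearer endpoint is closest to the tip
        best = min(remaining, key=lambda e: min(dist(tip, e[0]), dist(tip, e[1])))
        remaining.remove(best)
        if dist(tip, best[1]) < dist(tip, best[0]):
            best = (best[1], best[0])  # flip so we enter at the nearer endpoint
        path.append(best)
    return path

# Example: three short strokes ordered into one plausible stitching sequence.
print(greedy_sewing_path([((0, 0), (1, 0)), ((5, 5), (6, 5)), ((1, 1), (2, 1))]))
```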
Citations: 11
Edge- and substrate-based effects for watercolor stylization
Pub Date : 2017-07-29 DOI: 10.1145/3092919.3092928
Santiago E. Montesdeoca, S. H. Soon, P. Bénard, Romain Vergne, J. Thollot, Hannes Rall, Davide Benvenuti
We investigate characteristic edge- and substrate-based effects for watercolor stylization. These two fundamental elements of painted art play a significant role in traditional watercolors and highly influence the pigment's behavior and application. Yet a detailed consideration of these specific elements for the stylization of 3D scenes has not been attempted before. Through this investigation, we contribute to the field by presenting ways to emulate two novel effects: dry-brush and gaps & overlaps. By doing so, we also found ways to improve upon well-studied watercolor effects such as edge-darkening and substrate granulation. Finally, we integrated controllable external lighting influences over the watercolorized result, together with other previously researched watercolor effects. These effects are combined through a direct stylization pipeline to produce sophisticated watercolor imagery, which retains spatial coherence in object-space and is locally controllable in real-time.
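As a small illustration of one of the well-studied effects mentioned above, edge darkening can be approximated by attenuating intensity where the image gradient is strong. This NumPy sketch is a generic image-space toy, not the paper's object-space, real-time stylization pipeline, and the strength parameter is an arbitrary choice.

```python
# Toy edge-darkening filter: pigment tends to pool and darken along edges.
import numpy as np

def edge_darken(img, strength=0.6):
    """img: float grayscale array in [0, 1]; darken pixels near strong gradients."""
    gy, gx = np.gradient(img)
    mag = np.sqrt(gx**2 + gy**2)
    mag = mag / (mag.max() + 1e-8)                 # normalise gradient magnitude
    return np.clip(img * (1.0 - strength * mag), 0.0, 1.0)

img = np.zeros((64, 64)); img[:, 32:] = 1.0        # toy image with one hard edge
print(edge_darken(img)[0, 30:35])                  # the bright side dips near the edge
```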
Citations: 9
Neural style transfer: a paradigm shift for image-based artistic rendering?
Pub Date : 2017-07-29 DOI: 10.1145/3092919.3092920
Amir Semmo, Tobias Isenberg, J. Döllner
In this meta paper we discuss image-based artistic rendering (IB-AR) based on neural style transfer (NST) and argue, while NST may represent a paradigm shift for IB-AR, that it also has to evolve as an interactive tool that considers the design aspects and mechanisms of artwork production. IB-AR received significant attention in the past decades for visual communication, covering a plethora of techniques to mimic the appeal of artistic media. Example-based rendering represents one of the most promising paradigms in IB-AR to (semi-)automatically simulate artistic media with high fidelity, but so far has been limited because it relies on pre-defined image pairs for training or informs only low-level image features for texture transfers. Advancements in deep learning have been shown to alleviate these limitations by matching content and style statistics via activations of neural network layers, thus making a generalized style transfer practicable. We categorize style transfers within the taxonomy of IB-AR, then propose a semiotic structure to derive a technical research agenda for NSTs with respect to the grand challenges of NPAR. We finally discuss the potentials of NSTs, thereby identifying applications such as casual creativity and art production.
Citations: 44
Real-time panorama maps
Pub Date : 2017-07-29 DOI: 10.1145/3092919.3092922
S. Brown, F. Samavati
Panorama maps are stylized paintings of terrain often seen at tourist destinations. They are difficult to create since they are both artistic and grounded in real geographic data. In this paper we present techniques for rendering real-world data in the style of Heinrich Berann's panorama maps in a real-time application. We analyse several of Berann's paintings to identify the artistic elements used. We use this analysis to form algorithms that mimic the panorama map style, focusing on replicating the terrain deformation, distorted projection, terrain colouring, tree brush strokes, water rendering, and atmospheric scattering. In our approach we use freely available digital earth data to render interactive panorama maps without needing further design work.
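A toy version of the distance-dependent terrain deformation described above might blend vertical exaggeration with a lift toward the horizon as a function of depth. The blend curve and constants below are invented for illustration and are not Berann's or the authors' parameters.

```python
# Toy terrain deformation: nearby terrain is exaggerated, distant terrain is
# flattened and lifted toward the horizon so more of the landscape stays visible.
import numpy as np

def deform_heights(height, depth, near=1.0, far=100.0, horizon_lift=30.0):
    """height, depth: arrays of terrain height and camera-space distance."""
    t = np.clip((depth - near) / (far - near), 0.0, 1.0)   # 0 = near, 1 = far
    exaggeration = 1.5 * (1.0 - t) + 0.5 * t               # exaggerate near terrain
    return height * exaggeration + horizon_lift * t**2     # lift distant terrain

print(deform_heights(np.array([10.0, 10.0]), np.array([5.0, 90.0])))
```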
Citations: 10
Mixed illumination analysis in single image for interactive color grading
Pub Date : 2017-07-29 DOI: 10.1145/3092919.3092927
Sylvain Duchêne, Carlos Aliaga, T. Pouli, P. Pérez
Colorists often use keying or rotoscoping tools to access and edit particular colors or parts of the scene. Although necessary, this is a time-consuming and potentially imprecise process, as it is not possible to fully separate the influence of light sources in the scene from the colors of objects and actors within it. To simplify this process, we present a new solution for automatically estimating the color and influence of multiple illuminants, based on image variation analysis. Using this information, we present a new color grading tool for simply and interactively editing the colors of detected illuminants, which fits naturally in color grading workflows. We demonstrate the use of our solution in several scenes, evaluating the quality of our results by means of a psychophysical study.
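One way to picture the per-pixel illuminant separation the abstract describes is a toy mixture model in which the image is reflectance times a spatially varying blend of illuminant colours; regrading an illuminant then means replacing its colour while keeping the per-pixel blend weights. The function name, shapes, and the division-based regrade below are illustrative assumptions, not the paper's estimation method.

```python
# Toy illuminant-mixture regrade: edit the light colours, keep their influence maps.
import numpy as np

def regrade(image, weights, old_lights, new_lights):
    """image: (H, W, 3); weights: (H, W, K) per-illuminant influence maps;
    old_lights / new_lights: (K, 3) RGB illuminant colours."""
    shading_old = weights @ np.asarray(old_lights)   # (H, W, 3) mixed light colour
    shading_new = weights @ np.asarray(new_lights)
    # dividing out the old shading and applying the new one edits only the light,
    # leaving the underlying reflectance untouched
    return image * shading_new / (shading_old + 1e-6)
```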
Citations: 7
A generic framework for the structured abstraction of images
Pub Date : 2017-07-29 DOI: 10.1145/3092919.3092930
Noura Faraj, Gui-Song Xia, J. Delon, Y. Gousseau
Structural properties are important clues for non-photorealistic representations of digital images. Therefore, image analysis tools have been intensively used either to produce stroke-based renderings or to yield abstractions of images. In this work, we propose to use a hierarchical and geometrical image representation, called a topographic map, made of shapes organized in a tree structure. There are two main advantages of this analysis tool. Firstly, it is able to deal with all scales, so that every shape of the input image is represented. Secondly, it accounts for the inclusion properties within the image. By iteratively performing simple local operations on the shapes (removal, rotation, scaling, replacement...), we are able to generate abstract renderings of digital photographs ranging from geometrical abstraction and painting-like effects to style transfer, using the same framework. In particular, results show that it is possible to create abstract images evoking Malevitch's Suprematist school, while remaining grounded in the structure of digital images, by replacing all the shapes in the tree by simple geometric shapes.
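A minimal sketch of a shape tree with one recursive local operation, loosely mirroring the topographic-map structure described above; the Shape fields and the replace-by-bounding-box operation are illustrative choices, not the authors' data structure or shape filters.

```python
# Shape tree with a bottom-up "local operation" pass (illustrative only).
from dataclasses import dataclass, field
from typing import List

@dataclass
class Shape:
    contour: list                 # list of (x, y) boundary points
    gray_level: float             # level-set value the shape came from
    children: List["Shape"] = field(default_factory=list)

def apply_bottom_up(shape, op):
    """Apply a local operation to every node, children before parents."""
    shape.children = [apply_bottom_up(c, op) for c in shape.children]
    return op(shape)

def to_axis_aligned_box(shape):
    """One possible abstraction step: replace the contour by its bounding box."""
    xs = [p[0] for p in shape.contour]
    ys = [p[1] for p in shape.contour]
    shape.contour = [(min(xs), min(ys)), (max(xs), min(ys)),
                     (max(xs), max(ys)), (min(xs), max(ys))]
    return shape

leaf = Shape(contour=[(2, 2), (3, 5), (5, 3)], gray_level=0.7)
root = Shape(contour=[(0, 0), (9, 0), (9, 9), (0, 9)], gray_level=0.2, children=[leaf])
print(apply_bottom_up(root, to_axis_aligned_box).children[0].contour)
```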
Citations: 5
Benchmarking non-photorealistic rendering of portraits
Pub Date : 2017-07-29 DOI: 10.1145/3092919.3092921
Paul L. Rosin, D. Mould, Itamar Berger, J. Collomosse, Yu-Kun Lai, Chuan Li, Hua Li, Ariel Shamir, Michael Wand, T. Wang, H. Winnemöller
We present a set of images for helping NPR practitioners evaluate their image-based portrait stylisation algorithms. Using a standard set both facilitates comparisons with other methods and helps ensure that presented results are representative. We give two levels of difficulty, each consisting of 20 images selected systematically so as to provide good coverage of several possible portrait characteristics. We applied three existing portrait-specific stylisation algorithms, two general-purpose stylisation algorithms, and one general learning-based stylisation algorithm to the first level of the benchmark, corresponding to the type of constrained images that have often been used in portrait-specific work. We found that the existing methods are generally effective on this new image set, demonstrating that level one of the benchmark is tractable; challenges remain at level two. Results revealed several advantages conferred by portrait-specific algorithms over general-purpose algorithms: portrait-specific algorithms can use domain-specific information to preserve key details such as eyes and to eliminate extraneous details, and they have more scope for semantically meaningful abstraction due to the underlying face model. Finally, we provide some thoughts on systematically extending the benchmark to higher levels of difficulty.
Citations: 20
Pigment-based recoloring of watercolor paintings
Pub Date : 2017-07-29 DOI: 10.1145/3092919.3092926
Elad Aharoni-Mack, Yakov Shambik, Dani Lischinski
The color palette used by an artist when creating a painting is an important tool for expressing emotion, directing attention, and more. However, choosing a palette is an intricate task that requires considerable skill and experience. In this work, we introduce a new tool designed to allow artists to experiment with alternative color palettes for existing watercolor paintings. This could be useful for generating alternative renditions for an existing painting, or for aiding in the selection of a palette for a new painting, related to an existing one. Our tool first estimates the original pigment-based color palette used to create the painting, and then decomposes the painting into a collection of pigment channels, each corresponding to a single palette color. In both of these tasks, we employ a version of the Kubelka-Munk model, which predicts the reflectance of a given mixture of pigments. Each channel in the decomposition is a piecewise-smooth map that specifies the concentration of one of the colors in the palette across the image. Another estimated map specifies the total thickness of the pigments across the image. The mixture of these pigment channels, also according to the Kubelka-Munk model, reconstructs the original painting. The artist is then able to manipulate the individual palette colors, obtaining results by remixing the pigment channels at interactive rates.
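The single-constant Kubelka-Munk mixing rule the abstract builds on can be sketched in a few lines: convert each pigment's reflectance to an absorption/scattering ratio K/S, mix the ratios linearly by concentration, and convert back to reflectance. The pigment reflectances in the example are made-up values, and the paper's palette estimation and per-pixel decomposition are not reproduced here.

```python
# Hedged sketch of single-constant Kubelka-Munk pigment mixing.
import numpy as np

def km_ratio(R):
    """Absorption/scattering ratio K/S of an opaque layer with reflectance R."""
    return (1.0 - R) ** 2 / (2.0 * R)

def km_reflectance(ks):
    """Invert K/S back to reflectance."""
    return 1.0 + ks - np.sqrt(ks * (ks + 2.0))

def mix(pigment_R, concentrations):
    """pigment_R: (P, 3) per-pigment RGB reflectance; concentrations: (P,)."""
    c = np.asarray(concentrations, dtype=float)
    c = c / c.sum()
    ks_mix = (c[:, None] * km_ratio(np.asarray(pigment_R))).sum(axis=0)
    return km_reflectance(ks_mix)

# Mixing a bluish and a yellowish pigment yields a greenish reflectance,
# as the subtractive KM model predicts.
print(mix([[0.2, 0.3, 0.8], [0.9, 0.8, 0.1]], [0.5, 0.5]))
```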
Citations: 27