
International Symposium on Non-Photorealistic Animation and Rendering: Latest Publications

Dynamic stylized shading primitives
Pub Date: 2011-08-05 DOI: 10.1145/2024676.2024693
David Vanderhaeghe, Romain Vergne, Pascal Barla, William V. Baxter
Shading appearance in illustrations, comics and graphic novels is designed to convey illumination, material and surface shape characteristics at once. Moreover, shading may vary depending on different configurations of surface distance, lighting, character expressions, timing of the action, to articulate storytelling or draw attention to a part of an object. In this paper, we present a method that imitates such expressive stylized shading techniques in dynamic 3D scenes, and which offers a simple and flexible means for artists to design and tweak the shading appearance and its dynamic behavior. The key contribution of our approach is to seamlessly vary appearance by using a combination of shading primitives that take into account lighting direction, material characteristics and surface features. We demonstrate their flexibility in a number of scenarios: minimal shading, comics or cartoon rendering, glossy and anisotropic material effects; including a variety of dynamic variations based on orientation, timing or depth. Our prototype implementation combines shading primitives with a layered approach and runs in real-time on the GPU.
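The abstract describes shading primitives driven by lighting direction, material and surface features, composited in layers on the GPU. Below is a minimal Python sketch of that general idea, assuming a simple two-primitive setup (a quantized toon ramp plus a hard-edged highlight); the function names, ramp and parameters are illustrative and are not the paper's actual primitives.

```python
# Minimal sketch (not the paper's system): a "shading primitive" here is a
# function of a lighting term remapped through an artist-style ramp, and
# primitives are composited back-to-front as layers.
import numpy as np

def toon_ramp(ndotl, steps=3):
    """Quantize the diffuse term into a few flat tones (cartoon-style base layer)."""
    t = np.clip(ndotl, 0.0, 1.0)
    return np.clip(np.floor(t * steps), 0, steps - 1) / (steps - 1)

def highlight_primitive(ndoth, sharpness=40.0, threshold=0.8):
    """Hard-edged stylized highlight driven by the half-vector term n.h."""
    return 1.0 / (1.0 + np.exp(-sharpness * (ndoth - threshold)))

def shade(normals, light_dir, view_dir, base_color, highlight_color):
    """Composite two primitives as layers: toon base first, highlight on top."""
    l = light_dir / np.linalg.norm(light_dir)
    v = view_dir / np.linalg.norm(view_dir)
    h = (l + v) / np.linalg.norm(l + v)                    # half vector
    ndotl = np.einsum('...k,k->...', normals, l)
    ndoth = np.einsum('...k,k->...', normals, h)
    base = toon_ramp(ndotl)[..., None] * base_color
    spec = highlight_primitive(ndoth)[..., None] * highlight_color
    return np.clip(base + spec, 0.0, 1.0)

# Example: a small grid of camera-facing normals under an off-axis light.
normals = np.tile(np.array([0.0, 0.0, 1.0]), (4, 4, 1))
image = shade(normals,
              light_dir=np.array([0.3, 0.5, 1.0]),
              view_dir=np.array([0.0, 0.0, 1.0]),
              base_color=np.array([0.9, 0.6, 0.4]),
              highlight_color=np.array([1.0, 1.0, 1.0]))
print(image.shape)   # (4, 4, 3)
```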
Citations: 14
Spatio-temporal analysis for parameterizing animated lines
Pub Date: 2011-08-05 DOI: 10.1145/2024676.2024690
Bert Buchholz, Noura Faraj, Sylvain Paris, E. Eisemann, T. Boubekeur
We describe a method to parameterize lines generated from animated 3D models in the context of animated line drawings. Cartoons and mechanical illustrations are popular subjects of non-photorealistic drawings and are often generated from 3D models. Adding texture to the lines, for instance to depict brush strokes or dashed lines, enables greater expressiveness, e.g. to distinguish between visible and hidden lines. However, dynamic visibility events and the evolving shape of the lines raise issues that have been only partially explored so far. In this paper, we assume that the entire 3D animation is known ahead of time, as is typically the case for feature animations and off-line rendering. At the core of our method is a geometric formulation of the problem as a parameterization of the space-time surface swept by a 2D line during the animation. First, we build this surface by extracting lines in each frame. We demonstrate our approach with silhouette lines. Then, we locate visibility events that would create discontinuities and propagate them through time. They decompose the surface into charts with a disc topology. We parameterize each chart via a least-squares approach that reflects the specific requirements of line drawing. This step results in a texture atlas of the space-time surface which defines the parameterization for each line. We show that by adjusting a few weights in the least-squares energy, the artist can obtain an artifact-free animated motion in a variety of typical non-photorealistic styles such as painterly strokes and technical line drawing.
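As a rough illustration of the least-squares step described above, the sketch below parameterizes one frame's line samples so that parameter spacing matches arc length while staying close to the previous frame's parameters. It assumes a fixed number of samples per frame and made-up weights, and omits the paper's space-time surface, charts and visibility events.

```python
# Minimal sketch of the idea only, not the paper's space-time formulation.
import numpy as np

def parameterize(points, prev_params=None, w_coherence=1.0):
    """points: (n, 2) polyline samples of one frame; returns (n,) texture parameters."""
    n = len(points)
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)    # arc lengths d_i
    rows, rhs = [], []
    # spacing term: t_{i+1} - t_i should be close to d_i
    for i in range(n - 1):
        r = np.zeros(n); r[i + 1], r[i] = 1.0, -1.0
        rows.append(r); rhs.append(seg[i])
    if prev_params is None:
        r = np.zeros(n); r[0] = 1.0
        rows.append(r); rhs.append(0.0)                      # pin t_0 = 0 on the first frame
    else:
        # temporal-coherence term: stay close to the previous frame's parameters
        for i in range(n):
            r = np.zeros(n); r[i] = w_coherence
            rows.append(r); rhs.append(w_coherence * prev_params[i])
    t, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(rhs), rcond=None)
    return t

# Example: two frames of a slightly moving line share a coherent parameterization.
frame0 = np.stack([np.linspace(0, 1, 20), np.zeros(20)], axis=1)
frame1 = frame0 + np.array([0.02, 0.01])
t0 = parameterize(frame0)
t1 = parameterize(frame1, prev_params=t0)
```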
Citations: 20
Animation for ancient tile mosaics
Pub Date: 2011-08-05 DOI: 10.1145/2024676.2024701
Dongwann Kang, Yong-Jin Ohn, M. Han, K. Yoon
In mosaic art, tiles of unique color, material, and shape are arranged on a plane to form patterns and shapes. Although previous research has been carried out on creating static mosaic-like images from non-mosaic input, mosaic animation requires a method to maintain the temporal coherence of tiles. Here we introduce a method that creates mosaic animations from videos by applying a temporally and spatially coherent tile-arrangement technique. We extract coherent feature lines from video input using video segmentation, and arrange tiles based on the feature lines. We then animate tiles along the motion of video, add and delete tiles to preserve the tile density, and smooth tile color via frames.
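A minimal sketch of one ingredient, tile placement along an extracted feature line, is given below; tile size, spacing and orientation handling are assumptions, and the paper's video segmentation, density preservation and color smoothing are not reproduced.

```python
# Minimal sketch: place tiles at roughly uniform arc-length spacing along a
# feature polyline, each oriented with the local tangent.
import numpy as np

def place_tiles(polyline, tile_size=8.0, gap=2.0):
    """polyline: (n, 2) feature-line samples; returns a list of (center, angle)."""
    seg = np.linalg.norm(np.diff(polyline, axis=0), axis=1)
    cum = np.concatenate([[0.0], np.cumsum(seg)])            # arc length at each vertex
    step = tile_size + gap
    placements = []
    for s in np.arange(0.0, cum[-1], step):
        i = np.searchsorted(cum, s, side='right') - 1
        i = min(i, len(seg) - 1)
        a = (s - cum[i]) / max(seg[i], 1e-9)                  # interpolate within the segment
        center = (1 - a) * polyline[i] + a * polyline[i + 1]
        tangent = polyline[i + 1] - polyline[i]
        angle = np.arctan2(tangent[1], tangent[0])
        placements.append((center, angle))
    return placements

# Example: tiles along a sine-shaped feature line.
x = np.linspace(0, 200, 100)
line = np.stack([x, 20 * np.sin(x / 30)], axis=1)
tiles = place_tiles(line)
```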
Citations: 6
Snaxels on a plane
Pub Date: 2011-08-05 DOI: 10.1145/2024676.2024683
Kevin Karsch, J. Hart
While many algorithms exist for tracing various contours for illustrating a meshed object, few algorithms organize these contours into region-bounding closed loops. Tracing closed-loop boundaries on a mesh can be problematic due to switchbacks caused by subtle surface variation, and the organization of these regions into a planar map can lead to many small region components due to imprecision and noise. This paper adapts "snaxels," an energy minimizing active contour method designed for robust mesh processing, and repurposes it to generate visual, shadow and shading contours, and a simplified visual-surface planar map, useful for stylized vector art illustration of the mesh. The snaxel active contours can also track contours as the mesh animates, and frame-to-frame correspondences between snaxels lead to a new method to convert the moving contours on a 3-D animated mesh into 2-D SVG curve animations for efficient embedding in Flash, PowerPoint and other dynamic vector art platforms.
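For intuition, the sketch below runs a plain greedy active contour on a 2-D edge map, the classical formulation that snaxels adapt to meshes; the energy weights are assumptions, and the mesh processing, planar-map construction and SVG export are not shown.

```python
# Minimal 2-D sketch of the underlying idea (a greedy active contour on an
# image edge map), not the paper's mesh-based snaxel formulation.
import numpy as np

def evolve_contour(points, edge_strength, alpha=0.5, iters=50):
    """points: (n, 2) integer pixel coords of a closed loop; edge_strength: 2-D array."""
    h, w = edge_strength.shape
    pts = points.copy()
    offsets = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    for _ in range(iters):
        for i in range(len(pts)):
            prev_p, next_p = pts[i - 1], pts[(i + 1) % len(pts)]
            best, best_e = pts[i], np.inf
            for dy, dx in offsets:
                cand = pts[i] + np.array([dy, dx])
                y = int(np.clip(cand[0], 0, h - 1))
                x = int(np.clip(cand[1], 0, w - 1))
                smooth = np.sum((cand - 0.5 * (prev_p + next_p)) ** 2)
                e = alpha * smooth - edge_strength[y, x]   # low energy: smooth and on an edge
                if e < best_e:
                    best, best_e = np.array([y, x]), e
            pts[i] = best
        # (omitted: resampling and topology events, which the snaxel method handles)
    return pts

# Example: shrink a circle onto the bright rim of a synthetic edge image.
yy, xx = np.mgrid[0:100, 0:100]
edges = np.exp(-((np.hypot(yy - 50, xx - 50) - 30) ** 2) / 10.0)
theta = np.linspace(0, 2 * np.pi, 40, endpoint=False)
init = np.stack([50 + 45 * np.sin(theta), 50 + 45 * np.cos(theta)], axis=1).astype(int)
result = evolve_contour(init, edges)
```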
Citations: 27
Image and video abstraction by multi-scale anisotropic Kuwahara filtering
Pub Date: 2011-08-05 DOI: 10.1145/2024676.2024686
J. Kyprianidis
The anisotropic Kuwahara filter is an edge-preserving filter that is especially useful for creating stylized abstractions from images or videos. It is based on a generalization of the Kuwahara filter that is adapted to the local structure of image features. In this work, two limitations of the anisotropic Kuwahara filter are addressed. First, it is shown that by adding thresholding to the weighting term computation of the sectors, artifacts are avoided and smooth results in noise-corrupted regions are achieved. Second, a multi-scale computation scheme is proposed that simultaneously propagates local orientation estimates and filtering results up a low-pass filtered pyramid. This allows for a strong abstraction effect and avoids artifacts in large low-contrast regions. The propagation is controlled by the local variances and anisotropies that are derived during the computation without extra overhead, resulting in a highly efficient scheme that is particularly suitable for real-time processing on a GPU.
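For reference, the sketch below implements the classic isotropic Kuwahara filter that the anisotropic, multi-scale variant generalizes; it uses wrap-around boundary handling for brevity and is not the paper's method.

```python
# Classic isotropic Kuwahara filter: per pixel, output the mean of the
# quadrant window with the lowest variance. Boundaries wrap (np.roll) to keep
# the sketch short.
import numpy as np
from scipy.ndimage import uniform_filter

def kuwahara(gray, radius=4):
    """gray: 2-D float image; radius should be even for this simple version."""
    half = radius // 2
    size = radius + 1                                    # quadrant window size (odd)
    mean = uniform_filter(gray, size)
    var = uniform_filter(gray ** 2, size) - mean ** 2
    # The four quadrant windows are the box filter shifted by +-half in y and x.
    shifts = [(sy * half, sx * half) for sy in (1, -1) for sx in (1, -1)]
    means = np.stack([np.roll(mean, s, axis=(0, 1)) for s in shifts])
    variances = np.stack([np.roll(var, s, axis=(0, 1)) for s in shifts])
    pick = np.argmin(variances, axis=0)                  # least-variant quadrant per pixel
    return np.take_along_axis(means, pick[None], axis=0)[0]

# Example usage on random noise (in practice, a photograph or video frame).
img = np.random.rand(128, 128)
out = kuwahara(img, radius=4)
```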
Citations: 40
Portrait painting using active templates
Pub Date: 2011-08-05 DOI: 10.1145/2024676.2024696
Mingtian Zhao, Song-Chun Zhu
Portraiture plays a substantial role in traditional painting, yet it has not been studied in depth in painterly rendering research. The difficulty in rendering human portraits is due to our acute visual perception of the structure of the human face. To achieve satisfactory results, a portrait rendering algorithm should account for facial structure. In this paper, we present an example-based method to render portrait paintings from photographs, by transferring brush strokes from portrait templates previously painted by artists. These strokes carry rich information about not only the facial structure but also how artists depict the structure with large and decisive brush strokes and vibrant colors. With a dictionary of portrait painting templates for different types of faces, we show that this method can produce satisfactory results.
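A minimal sketch of the geometric part of such a transfer, warping template strokes onto a new face through an affine fit between corresponding landmarks, is shown below with made-up landmark positions; the paper's active templates, stroke dictionary and rendering stage are not reproduced.

```python
# Minimal sketch: fit a 2-D affine transform between template and target face
# landmarks, then apply it to the template's brush-stroke control points.
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping src landmarks (n, 2) to dst (n, 2)."""
    A = np.hstack([src, np.ones((len(src), 1))])      # rows [x y 1]
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)       # (3, 2) transform matrix
    return M

def warp_strokes(strokes, M):
    """strokes: list of (k, 2) arrays of brush-stroke control points."""
    return [np.hstack([s, np.ones((len(s), 1))]) @ M for s in strokes]

# Example with hypothetical landmark positions (eyes, nose, mouth corners).
template_lm = np.array([[30, 40], [70, 40], [50, 60], [35, 80], [65, 80]], float)
target_lm   = np.array([[33, 45], [75, 44], [53, 66], [38, 88], [70, 86]], float)
M = fit_affine(template_lm, target_lm)
template_strokes = [np.array([[28, 38], [35, 36], [42, 38]], float)]   # one eyebrow stroke
warped = warp_strokes(template_strokes, M)
```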
Citations: 64
Towards automatic concept transfer
Pub Date: 2011-08-05 DOI: 10.1145/2024676.2024703
Naila Murray, S. Skaff, L. Marchesotti, F. Perronnin
This paper introduces a novel approach to automatic concept transfer; examples of concepts are "romantic", "earthy", and "luscious". The approach modifies the color content of an input image given only a concept specified by a user in natural language, thereby requiring minimal user input. This approach is particularly useful for users who are aware of the message they wish to convey in the transferred image while being unsure of the color combination needed to achieve the corresponding transfer. The user may adjust the intensity level of the concept transfer to his/her liking with a single parameter. The proposed approach uses a convex clustering algorithm, with a novel pruning mechanism, to automatically set the complexity of models of chromatic content. It also uses the Earth-Mover's Distance to compute a mapping between the models of the input image and the target chromatic concept. Results show that our approach yields transferred images which effectively represent concepts, as confirmed by a user study.
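As a small illustration of the distance used above, the sketch below compares an image's hue distribution to a hypothetical concept palette with the Earth Mover's Distance (1-D Wasserstein distance); the palette, weights and hue-only simplification are assumptions, not the paper's chromatic models.

```python
# Minimal sketch of one component only: scoring how far an image's colour
# distribution is from a target "concept" palette with the Earth Mover's Distance.
import colorsys
import numpy as np
from scipy.stats import wasserstein_distance

def hue_emd(image_rgb, concept_hues, concept_weights):
    """image_rgb: (H, W, 3) floats in [0, 1]; concept: hue samples in [0, 1] plus weights."""
    flat = image_rgb.reshape(-1, 3)
    hues = np.array([colorsys.rgb_to_hsv(*px)[0] for px in flat])
    return wasserstein_distance(hues, concept_hues, v_weights=concept_weights)

# Example: how close is a bluish image to a hypothetical "earthy" palette?
img = np.zeros((8, 8, 3)); img[..., 2] = 0.8                 # mostly blue
earthy_hues = np.array([0.05, 0.08, 0.10, 0.33])             # browns and a muted green
earthy_weights = np.array([0.4, 0.3, 0.2, 0.1])
print(hue_emd(img, earthy_hues, earthy_weights))
```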
Citations: 19
XDoG: advanced image stylization with eXtended Difference-of-Gaussians
Pub Date: 2011-08-05 DOI: 10.1145/2024676.2024700
H. Winnemöller
Recent extensions to the standard Difference-of-Gaussians (DoG) edge detection operator have rendered it less susceptible to noise and increased its aesthetic appeal for stylistic depiction applications. Despite these advances, the technical subtleties and stylistic potential of the DoG operator are often overlooked. This paper reviews the DoG operator, including recent improvements, and offers many new results spanning a variety of styles, including pencil-shading, pastel, hatching, and binary black-and-white images. Additionally, we demonstrate a range of subtle artistic effects, such as ghosting, speed-lines, negative edges, indication, and abstraction, and we explain how all of these are obtained without, or only with slight modifications to an extended DoG formulation. In all cases, the visual quality achieved by the extended DoG operator is comparable to or better than those of systems dedicated to a single style.
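A compact sketch of the standard XDoG formulation follows: a sharpened difference of Gaussians passed through a soft tanh threshold. The parameter values are illustrative defaults rather than settings taken from the paper.

```python
# Standard XDoG: S = (1 + p) * G_sigma - p * G_{k*sigma}, followed by a soft
# threshold T(u) = 1 if u >= eps else 1 + tanh(phi * (u - eps)).
import numpy as np
from scipy.ndimage import gaussian_filter

def xdog(gray, sigma=1.0, k=1.6, p=20.0, eps=0.4, phi=10.0):
    """gray: 2-D float image in [0, 1]; returns a stylized image in [0, 1]."""
    g1 = gaussian_filter(gray, sigma)
    g2 = gaussian_filter(gray, k * sigma)
    sharpened = (1.0 + p) * g1 - p * g2                  # DoG-sharpened luminance
    out = np.where(sharpened >= eps,
                   1.0,
                   1.0 + np.tanh(phi * (sharpened - eps)))   # soft threshold
    return np.clip(out, 0.0, 1.0)

# Example usage on a synthetic gradient with a dark bar as an "edge".
img = np.tile(np.linspace(0.0, 1.0, 256), (128, 1))
img[:, 120:136] = 0.0
stylized = xdog(img)
```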
Citations: 77
Stylization-based ray prioritization for guaranteed frame rates
Pub Date: 2011-08-05 DOI: 10.1145/2024676.2024685
Bernhard Kainz, M. Steinberger, Stefan Hauswiesner, Rostislav Khlebnikov, D. Schmalstieg
This paper presents a new method to control graceful scene degradation in complex ray-based rendering environments. It proposes to constrain the image sampling density with object features, which are known to support the comprehension of the three-dimensional shape. The presented method uses Non-Photorealistic Rendering (NPR) techniques to extract features such as silhouettes, suggestive contours, suggestive highlights, ridges and valleys. To map different feature types to sampling densities, we also present an evaluation of the features' impact on the resulting image quality. To reconstruct the image from sparse sampling data, we use linear interpolation on an adaptively aligned fractal pattern. With this technique, we are able to present an algorithm that guarantees a desired minimal frame rate without much loss of image quality. Our scheduling algorithm maximizes the use of each given time slice by rendering features in order of their corresponding importance values until a time constraint is reached. We demonstrate how our method can be used to boost and guarantee the rendering time in complex ray-based environments consisting of geometric as well as volumetric data.
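The scheduling idea can be sketched as follows: trace rays in decreasing order of a feature-derived importance value until the frame budget is spent, then fill the remaining pixels. The `trace_ray` stand-in and the nearest-neighbour fill below are simplifications of the paper's fractal-pattern interpolation.

```python
# Minimal sketch of the budgeted-scheduling idea only.
import time
import numpy as np
from scipy.ndimage import distance_transform_edt

def render_with_budget(importance, trace_ray, budget_s=0.016):
    """importance: (H, W) priority map; trace_ray(y, x) -> float sample value."""
    h, w = importance.shape
    order = np.dstack(np.unravel_index(np.argsort(-importance, axis=None), (h, w)))[0]
    image = np.zeros((h, w)); traced = np.zeros((h, w), dtype=bool)
    start = time.perf_counter()
    for y, x in order:                                   # most important rays first
        if time.perf_counter() - start > budget_s:
            break
        image[y, x] = trace_ray(y, x)
        traced[y, x] = True
    # crude nearest-traced-pixel fill for everything the budget did not cover
    idx = distance_transform_edt(~traced, return_distances=False, return_indices=True)
    return image[tuple(idx)]

# Example with a toy "scene": importance peaks near an image-space contour.
yy, xx = np.mgrid[0:64, 0:64]
importance = np.exp(-np.abs(np.hypot(yy - 32, xx - 32) - 20))
frame = render_with_budget(importance, trace_ray=lambda y, x: np.hypot(y, x) / 90.0)
```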
Citations: 3
Temporal noise control for sketchy animation
Pub Date: 2011-08-05 DOI: 10.1145/2024676.2024691
Gioacchino Noris, D. Sýkora, Stelian Coros, B. Whited, Maryann Simmons, A. Sorkine-Hornung, M. Gross, R. Sumner
We propose a technique to control the temporal noise present in sketchy animations. Given an input animation drawn digitally, our approach works by combining motion extraction and inbetweening techniques to generate a reduced-noise sketchy animation registered to the input animation. The amount of noise is then controlled by a continuous parameter value. Our method can be applied to effectively reduce the temporal noise present in sequences of sketches to a desired rate, while preserving the geometric richness of the sketchy style in each frame. This provides the manipulation of temporal noise as an additional artistic parameter, e.g. to emphasize character emotions and scene atmosphere, and enables the display of sketchy content to broader audiences by producing animations with comfortable noise levels. We demonstrate the effectiveness of our approach on a series of rough hand-drawn animations.
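The control knob itself can be sketched simply: blend each frame's registered stroke points between a temporally smoothed version and the original drawing with a single noise parameter. The moving-average smoothing below stands in for the paper's motion-extraction and inbetweening pipeline.

```python
# Minimal sketch of the noise-control parameter only.
import numpy as np

def control_noise(frames, noise=0.3, window=5):
    """frames: (T, N, 2) registered stroke points per frame; returns same shape."""
    frames = np.asarray(frames, dtype=float)
    T = len(frames)
    smoothed = np.empty_like(frames)
    half = window // 2
    for t in range(T):                     # moving-average stand-in for inbetweening
        lo, hi = max(0, t - half), min(T, t + half + 1)
        smoothed[t] = frames[lo:hi].mean(axis=0)
    # noise = 0 gives the fully smoothed motion, noise = 1 keeps the original jitter
    return (1.0 - noise) * smoothed + noise * frames

# Example: a jittery horizontal stroke, re-rendered at 30% of its original noise.
rng = np.random.default_rng(0)
base = np.stack([np.linspace(0, 100, 50), np.zeros(50)], axis=1)
jittery = base[None] + rng.normal(scale=2.0, size=(24, 50, 2))
calmed = control_noise(jittery, noise=0.3)
```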
Citations: 23