
Latest publications from the International Symposium on Non-Photorealistic Animation and Rendering

Chromatic shadows for improved perception
Pub Date : 2011-08-05 DOI: 10.1145/2024676.2024694
Veronika Soltészová, Daniel Patel, I. Viola
Soft shadows are effective depth and shape cues. However, traditional shadowing algorithms decrease the luminance in shadow areas. The features in shadow become dark, and shadowing thus hides information. For this reason, medical illustrators decrease the luminance less in shadowed areas and compensate for the lower luminance range by adding color, i.e., by introducing a chromatic component. This paper presents a novel technique that enables an interactive setup of an illustrative shadow representation to prevent overdarkening of important structures. We introduce a scalar attribute for every voxel, denoted as shadowiness, and propose a shadow transfer function that maps the shadowiness to a color and a blend factor. Typically, the blend factor increases linearly with the shadowiness. We then blend the original object color with the shadow color according to the blend factor. We suggest a specific shadow transfer function, designed together with a medical illustrator, which shifts the shadow color towards blue. This shadow transfer function is quantitatively evaluated with respect to relative depth and surface perception.
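The mapping the abstract describes (shadowiness drives a blend factor, which mixes the object color with a chromatic shadow color) can be sketched in a few lines. The bluish default shadow color and the strictly linear blend here are illustrative assumptions, not the authors' calibrated transfer function.

```python
import numpy as np

def shadow_transfer(shadowiness, object_color,
                    shadow_color=(0.1, 0.2, 0.5)):
    """Blend an object color toward a chromatic shadow color.

    `shadowiness` is a per-voxel scalar in [0, 1]; the blend factor
    is taken to grow linearly with it, as the abstract suggests. The
    bluish `shadow_color` default is an illustrative guess.
    """
    blend = np.clip(shadowiness, 0.0, 1.0)   # linear blend factor
    obj = np.asarray(object_color, dtype=float)
    sh = np.asarray(shadow_color, dtype=float)
    return (1.0 - blend) * obj + blend * sh

# A fully shadowed white voxel takes on the shadow color entirely.
print(shadow_transfer(1.0, (1.0, 1.0, 1.0)))
```

Because the shadow color keeps a chromatic component instead of dropping luminance toward black, a shadowed region stays distinguishable from a genuinely dark feature.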
Citations: 41
Customizing painterly rendering styles using stroke processes
Pub Date : 2011-08-05 DOI: 10.1145/2024676.2024698
Mingtian Zhao, Song-Chun Zhu
In this paper, we study the stroke placement problem in painterly rendering, and present a solution named stroke processes, which enables intuitive and interactive customization of painting styles by mapping perceptual characteristics to rendering parameters. Using our method, a user can adjust styles (e.g., Fig.1) easily by controlling these intuitive parameters. Our model and algorithm are capable of reflecting various styles in a single framework, which includes point processes and stroke neighborhood graphs to model the spatial layout of brush strokes, and stochastic reaction-diffusion processes to compute the levels and contrasts of their attributes to match desired statistics. We demonstrate the rendering quality and flexibility of this method with extensive experiments.
Citations: 37
Painterly animation using video semantics and feature correspondence
Pub Date : 2010-06-07 DOI: 10.1145/1809939.1809948
Liang Lin, K. Zeng, Han Lv, Yizhou Wang, Ying-Qing Xu, Song-Chun Zhu
We present an interactive system that stylizes an input video into a painterly animation. The system consists of two phases. The first is a Video Parsing phase that extracts and labels semantic objects with different material properties (skin, hair, cloth, and so on) in the video, and then establishes robust correspondence between frames for discriminative image features inside each object. The second, Painterly Rendering, phase performs the stylization based on the video semantics and feature correspondence. Compared to previous work, the proposed method advances painterly animation in three aspects. Firstly, we render artistic painterly styles using a rich set of example-based brush strokes. These strokes, placed in multiple layers and passes, are automatically selected according to the video semantics. Secondly, we warp brush strokes according to global object deformations, so that the strokes appear to be tightly attached to the object surfaces. Thirdly, we propose a series of novel techniques to reduce the scintillation effects. Results of applying our system to several video clips show that it produces expressive oil painting animations.
Citations: 40
Directional texture transfer
Pub Date : 2010-06-07 DOI: 10.1145/1809939.1809945
Hochang Lee, Sanghyun Seo, Seung-Tack Ryoo, K. Yoon
A texture transfer algorithm modifies the target image, replacing its high-frequency information with that of the example source image. Previous texture transfer techniques normally use factors such as color distance and standard deviation for selecting the best texture from the candidate sets. These factors are useful for expressing a texture effect of the example source in the target image, but are less than optimal for respecting the object shape of the target image. In this paper, we propose a novel texture transfer algorithm that expresses a directional effect based on the flow of the target image. For this, we use a directional factor that considers the gradient direction of the target image. We add to the previous fast texture transfer algorithm an additional energy term that respects the image gradient. Additionally, we propose a method for estimating the directional factor weight from the target image. We have tested our algorithm with various target images. Our algorithm can produce a result image that combines the texture features of the example source with the flow of the target image.
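The candidate-selection energy the abstract outlines (a classic color-distance term plus a directional term that penalizes gradient misalignment) might look roughly like this. The cosine penalty and the weights are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def patch_energy(target_patch, candidate_patch,
                 target_grad_dir, candidate_grad_dir,
                 w_color=1.0, w_dir=0.5):
    """Score a candidate texture patch for texture transfer.

    Lower is better. `*_grad_dir` are gradient directions in radians;
    the directional term is 0 when the candidate's texture direction
    matches the target's local flow and grows as they diverge. The
    weights and the cosine form are illustrative assumptions.
    """
    color_term = np.mean((np.asarray(target_patch, dtype=float)
                          - np.asarray(candidate_patch, dtype=float)) ** 2)
    # 1 - cos(angle difference): 0 when aligned, 2 when opposite
    dir_term = 1.0 - np.cos(target_grad_dir - candidate_grad_dir)
    return w_color * color_term + w_dir * dir_term
```

With `w_dir = 0`, this degenerates to the color-only selection of earlier texture transfer methods; raising `w_dir` trades color fidelity for alignment with the target's flow.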
Citations: 88
Sisley the abstract painter
Pub Date : 2010-06-07 DOI: 10.1145/1809939.1809951
Mingtian Zhao, Song-Chun Zhu
We present an interactive abstract painting system named Sisley. Sisley builds upon the psychological principle [Berlyne 1971] that abstract art is often characterized by greater perceptual ambiguity than photographs, which tends to invoke moderate mental effort from the audience for interpretation, accompanied by subtle aesthetic pleasure. Given an input photograph, Sisley decomposes it into a hierarchy/tree of its constituent image components (e.g., regions, objects of different categories) with interactive guidance from the user, then automatically generates corresponding abstract painting images, with increased ambiguity of both the scene and individual objects at the desired levels. Sisley consists of three major working parts: (1) an interactive image parser executing the tasks of segmentation, labeling, and hierarchical organization, (2) a painterly rendering engine with abstract operators for transferring the image appearance, and (3) a servomechanism module for numerical ambiguity computation and control. With the help of Sisley, even an amateur user can easily create abstract paintings from photographs in minutes. We have evaluated the rendering results of Sisley using human experiments, and verified that they have abstract effects similar to original abstract paintings by artists.
Citations: 68
Compact explosion diagrams
Pub Date : 2010-06-07 DOI: 10.1145/1809939.1809942
Markus Tatzgern, Denis Kalkofen, D. Schmalstieg
This paper presents a system to automatically generate compact explosion diagrams. Inspired by handmade illustrations, our approach reduces the complexity of an explosion diagram by rendering an exploded view only for a subset of the assemblies of an object. However, the exploded views are chosen so that they allow inference of the remaining unexploded assemblies of the entire 3D model. In particular, our approach demonstrates the assembly of a set of identical groups of parts by presenting an exploded view only for a single representative. In order to identify the representatives, our system automatically searches for recurring subassemblies. It selects representatives based on a quality evaluation of their potential exploded views. Our system takes into account visibility information both of the exploded view of a potential representative and of the remaining unexploded assemblies. This allows rendering a balanced, compact explosion diagram, consisting of a clear presentation of the exploded representatives as well as the unexploded remaining assemblies. Since representatives may interfere with one another, our system furthermore optimizes combinations of representatives. Throughout this paper we show a number of examples, which have all been rendered from unmodified 3D CAD models.
Citations: 26
Self-similar texture for coherent line stylization
Pub Date : 2010-06-07 DOI: 10.1145/1809939.1809950
Pierre Bénard, Forrester Cole, Aleksey Golovinskiy, Adam Finkelstein
Stylized line rendering for animation has traditionally traded off two undesirable artifacts: stroke texture sliding and stroke texture stretching. This paper proposes a new stroke texture representation, the self-similar line artmap (SLAM), which avoids both artifacts. SLAM textures provide continuous, infinite zoom while maintaining an approximately constant appearance in screen space, and can be produced automatically from a single exemplar. SLAMs can be used as drop-in replacements for conventional stroke textures in 2D illustration and animation. Furthermore, SLAMs enable a new, simple approach to temporally coherent rendering of 3D paths that is suitable for interactive applications. We demonstrate results for 2D and 3D animations.
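The continuous-zoom behavior can be sketched as choosing the two artmap levels nearest a given zoom factor and blending between them, so screen-space stroke appearance stays roughly constant. The power-of-`base` level spacing assumed here is an illustrative simplification, not the paper's construction.

```python
import math

def slam_level(zoom, base=2.0):
    """Pick artmap levels and a blend weight for a zoom factor.

    Assumes the artmap stores self-similar stroke textures at scales
    base**0, base**1, base**2, ...; a continuous zoom is served by
    blending the two nearest levels. Returns (lower level, upper
    level, blend weight toward the upper level).
    """
    t = math.log(zoom, base)   # fractional level index
    lo = math.floor(t)
    return int(lo), int(lo) + 1, t - lo
```

At exact level boundaries the blend weight is 0, so the rendered texture passes smoothly through the stored exemplars as the camera zooms.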
Citations: 47
Stylized depiction of images based on depth perception
Pub Date : 2010-06-07 DOI: 10.1145/1809939.1809952
Jorge López-Moreno, Jorge Jimenez, Sunil Hadap, E. Reinhard, K. Anjyo, D. Gutierrez
Recent work in image editing is opening up new possibilities to manipulate and enhance input images. Within this context, we leverage well-known characteristics of human perception along with a simple depth approximation algorithm to creatively relight images, generating non-photorealistic renditions that would be difficult to achieve with existing methods. Our real-time implementation on graphics hardware allows the user to efficiently explore artistic possibilities for each image. We show results produced with four different styles, proving the versatility of our approach, and validate our assumptions and simplifications by means of a user study.
Citations: 23
Example-based stippling using a scale-dependent grayscale process
Pub Date : 2010-06-07 DOI: 10.1145/1809939.1809946
Domingo Martín, G. Arroyo, M. V. Luzón, Tobias Isenberg
We present an example-based approach to synthesizing stipple illustrations for static 2D images that produces scale-dependent results appropriate for an intended spatial output size and resolution. We show how treating stippling as a grayscale process allows us to both produce on-screen output and to achieve stipple merging at medium tonal ranges. At the same time we can also produce images with high spatial and low color resolution for print reproduction. In addition, we discuss how to incorporate high-level illustration considerations into the stippling process based on discussions with and observations of a stipple artist. The implementation of the technique is based on a fast method for distributing dots using halftoning and can be used to create stipple images interactively.
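The underlying halftoning idea (place a dot wherever local darkness exceeds a tiled threshold matrix) can be sketched as ordered dithering. This shows only the generic mechanism the paper builds on, not its scale-dependent, example-based stipple distribution.

```python
import numpy as np

def stipple(gray, threshold_matrix):
    """Place stipple dots by ordered-dither halftoning.

    `gray` is an image with values in [0, 1] (1 = white); a dot is
    placed wherever the darkness (1 - gray) exceeds the tiled
    threshold matrix. Returns a boolean dot mask. This is generic
    ordered dithering, used here as a stand-in for the paper's
    fast dot-distribution method.
    """
    h, w = gray.shape
    th, tw = threshold_matrix.shape
    tiled = np.tile(threshold_matrix,
                    (h // th + 1, w // tw + 1))[:h, :w]
    darkness = 1.0 - gray
    return darkness > tiled

# A 2x2 Bayer-style threshold matrix (illustrative choice).
bayer = np.array([[0.0, 0.5],
                  [0.75, 0.25]])
```

Treating the output as grayscale rather than binary, as the abstract proposes, is what allows dots to merge smoothly in mid-tone regions instead of switching on and off per pixel.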
Citations: 34
Video stylization for digital ambient displays of home movies
Pub Date : 2010-06-07 DOI: 10.1145/1809939.1809955
T. Wang, J. Collomosse, David Slatter, P. Cheatle, D. Greig
Falling hardware costs have prompted an explosion in casual video capture by domestic users. Yet, this video is infrequently accessed post-capture and often lies dormant on users' PCs. We present a system to breathe life into home video repositories, drawing upon artistic stylization to create a "Digital Ambient Display" that automatically selects, stylizes and transitions between videos in a semantically meaningful sequence. We present a novel algorithm based on multi-label graph cut for segmenting video into temporally coherent region maps. These maps are used to both stylize video into cartoons and paintings, and measure visual similarity between frames for smooth sequence transitions. We demonstrate coherent segmentation and stylization over a variety of home videos.
Citations: 15