
International Symposium on Non-Photorealistic Animation and Rendering: Latest Publications

Example-based brushes for coherent stylized renderings
Pub Date : 2017-07-29 DOI: 10.1145/3092919.3092929
Ming Zheng, Antoine Milliez, M. Gross, R. Sumner
Painterly stylization is the cornerstone of non-photorealistic rendering. Inspired by the versatility of paint as a physical medium, existing methods target intuitive interfaces that mimic physical brushes, providing artists the ability to intuitively place paint strokes in a digital scene. Other work focuses on physical simulation of the interaction between paint and paper or realistic rendering of wet and dry paint. In our work, we leverage the versatility of example-based methods that can generate paint strokes of arbitrary shape and style based on a collection of images acquired from physical media. Such ideas have gained popularity since they do not require cumbersome physical simulation and achieve high fidelity without the need for a specific model or rule set. However, existing methods are limited to the generation of static 2D paintings and cannot be applied in the context of 3D painting and animation, where paint strokes change shape and length as the camera viewport moves. Our method targets this shortcoming by generating temporally coherent example-based paint strokes that accommodate such length and shape changes. We demonstrate the robustness of our method with a 2D painting application that provides immediate feedback to the user and show how our brush model can be applied to the screen-space rendering of 3D paintings on a variety of examples.
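The abstract does not detail how temporal coherence is achieved, so the sketch below only illustrates one plausible ingredient, not the paper's method: anchoring the exemplar's arc-length parameterization at the stroke start and tiling it at a fixed period, so that when a stroke lengthens between frames the existing samples keep their texture coordinates and new texture appears only at the free end. The function name and the exemplar_len parameter are hypothetical.

```python
import numpy as np

def stroke_texture_coords(points, exemplar_len=64.0):
    """Map polyline samples to u coordinates of a brush exemplar.

    The parameterization is anchored at the stroke start and tiled at a
    fixed physical period ('exemplar_len' is a hypothetical parameter).
    When the stroke grows between frames, samples on the shared prefix
    keep their texture coordinates, so the rendered paint stays coherent
    and new texture only appears at the free end.
    """
    pts = np.asarray(points, dtype=float)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    arc = np.concatenate(([0.0], np.cumsum(seg)))  # arc length from the anchor
    return (arc / exemplar_len) % 1.0              # tile the exemplar

# A stroke that lengthens between two frames: the shared prefix gets
# identical coordinates in both frames.
frame_a = [(0, 0), (10, 0), (20, 5)]
frame_b = frame_a + [(30, 5), (40, 10)]
print(stroke_texture_coords(frame_a))
print(stroke_texture_coords(frame_b)[:len(frame_a)])  # same as frame_a
```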
Citations: 11
Texture-aware ASCII art synthesis with proportional fonts
Pub Date : 2015-06-20 DOI: 10.2312/EXP.20151191
Xuemiao Xu, Linyuan Zhong, M. Xie, Jing Qin, Yilan Chen, Qiang Jin, T. Wong, Guoqiang Han
We present a fast structure-based ASCII art generation method that accepts arbitrary images (real photographs or hand drawings) as input. Our method supports not only fixed-width fonts, but also the visually more pleasant and computationally more challenging proportional fonts, which allows us to represent challenging images with a variety of structures by characters. We take human perception into account and develop a novel feature extraction scheme based on a multi-orientation phase congruency model. Different from most existing contour detection methods, our scheme does not attempt to remove textures as much as possible. Instead, it aims at faithfully capturing visually sensitive features, including both main contours and textural structures, while suppressing visually insensitive features, such as minor texture elements and noise. Together with a deformation-tolerant image similarity metric, we can generate lively and meaningful ASCII art, even when the choices of character shapes and placement are very limited. A dynamic-programming-based optimization is proposed to simultaneously determine the optimal proportional-font characters for matching and their optimal placement. Experimental results show that our results outperform state-of-the-art methods in terms of visual quality.
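As a rough illustration of the dynamic-programming idea for proportional fonts (a sketch under assumptions, not the paper's formulation): fill one text row left to right, where each glyph advances the cursor by its own width, and choose the sequence that minimizes a per-placement mismatch cost. The cost callback here is only a stand-in for the deformation-tolerant similarity metric, and all names are hypothetical.

```python
import math

def fill_row(row_width, glyph_widths, cost):
    """Pick a sequence of proportional-font glyphs covering one text row.

    glyph_widths: dict glyph -> pixel width
    cost(x, g):   mismatch of glyph g placed with its left edge at pixel x
    Returns the glyph sequence minimizing the total mismatch, found with a
    1-D dynamic program over horizontal cursor positions.
    """
    best = [math.inf] * (row_width + 1)
    choice = [None] * (row_width + 1)
    best[0] = 0.0
    for x in range(row_width + 1):
        if best[x] == math.inf:
            continue
        for g, w in glyph_widths.items():
            nx = x + w
            if nx > row_width:
                continue
            c = best[x] + cost(x, g)
            if c < best[nx]:
                best[nx], choice[nx] = c, (x, g)
    # Backtrack from the right edge of the row.
    glyphs, x = [], row_width
    while x > 0 and choice[x] is not None:
        x, g = choice[x]
        glyphs.append(g)
    return list(reversed(glyphs))

# Toy usage: three glyphs of different widths and an arbitrary cost.
widths = {"i": 2, "n": 4, "m": 6}
print(fill_row(12, widths, lambda x, g: abs(x - 3) * 0.1 + widths[g] * 0.01))
```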
Citations: 7
Semi-automatic digital epigraphy from images with normals
Pub Date : 2015-06-20 DOI: 10.2312/EXP.20151182
Sema Berkiten, Xinyi Fan, S. Rusinkiewicz
We present a semi-automated system for converting photometric datasets (RGB images with normals) into geometry-aware non-photorealistic illustrations that obey the common conventions of epigraphy (black-and-white archaeological drawings of inscriptions). We focus on rock inscriptions formed by carving into or pecking out the rock surface: these are characteristically rough with shallow relief, making the problem very challenging for previous line drawing methods. Our system allows the user to easily outline the inscriptions on the rock surface, then segment out the inscriptions and create line drawings and shaded renderings in a variety of styles. We explore both constant-width and tilt-indicating lines, as well as locally shape-revealing shading. Our system produces more understandable illustrations than previous NPR techniques, successfully converting epigraphy from a manual and painstaking process into a user-guided semi-automatic process.
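A minimal sketch of the "shape-revealing shading" ingredient, assuming the photometric dataset provides a per-pixel normal map: shade with a low, raking light so the shallow relief of carved inscriptions shows up, then threshold to black and white. This is illustrative only and not the paper's rendering pipeline; raking_light_drawing and its parameters are hypothetical.

```python
import numpy as np

def raking_light_drawing(normals, light=(1.0, 0.0, 0.35), ink_level=0.3):
    """Toy black-and-white rendering from a per-pixel normal map.

    normals:  H x W x 3 array of unit surface normals
    light:    a raking (low-elevation) light direction, which exaggerates
              the shallow relief typical of pecked or carved inscriptions
    Pixels whose Lambertian response falls below 'ink_level' are inked.
    """
    l = np.asarray(light, dtype=float)
    l /= np.linalg.norm(l)
    shade = np.clip(np.einsum("hwc,c->hw", normals, l), 0.0, 1.0)
    return shade < ink_level  # True where the illustration puts ink

# Usage on a synthetic normal map derived from a shallow carved square.
h = np.zeros((64, 64))
h[20:40, 20:40] = -1.0
gy, gx = np.gradient(h)
n = np.dstack((-gx, -gy, np.ones_like(h)))
n /= np.linalg.norm(n, axis=2, keepdims=True)
print(raking_light_drawing(n).mean())  # fraction of inked pixels
```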
Citations: 4
The Markov pen: online synthesis of free-hand drawing styles
Pub Date : 2015-06-20 DOI: 10.2312/EXP.20151193
Katrin Lang, M. Alexa
Learning expressive curve styles from example is crucial for interactive or computer-based narrative illustrations. We propose a method for online synthesis of free-hand drawing styles along arbitrary base paths by means of an autoregressive Markov Model. Choice on further curve progression is made while drawing, by sampling from a series of previously learned feature distributions subject to local curvature. The algorithm requires no user-adjustable parameters other than one short example style. It may be used as a custom "random brush" designer in any task that requires rapid placement of a large number of detail-rich shapes that are tedious to create manually.
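The sketch below illustrates the autoregressive idea in its simplest form, ignoring the base-path constraint: learn, from one example stroke, a table of turning angles conditioned on the previous turning angle (a crude stand-in for the local-curvature feature), then sample from the table step by step while walking. All function names and parameters are hypothetical.

```python
import numpy as np

def turning_angles(points):
    """Signed turning angle at each interior vertex of a polyline."""
    d = np.diff(np.asarray(points, dtype=float), axis=0)
    heading = np.unwrap(np.arctan2(d[:, 1], d[:, 0]))
    return np.diff(heading)

def learn_style(example_points, n_bins=6):
    """Histogram of the next turning angle conditioned on the current one."""
    a = turning_angles(example_points)
    edges = np.linspace(a.min(), a.max(), n_bins + 1)
    bins = np.clip(np.digitize(a[:-1], edges) - 1, 0, n_bins - 1)
    table = [a[1:][bins == b] for b in range(n_bins)]
    return edges, [t if len(t) else a for t in table]  # fall back to all angles

def synthesize(style, n_steps, step=2.0, seed=0):
    """Walk forward, sampling each turning angle from the learned table."""
    edges, table = style
    rng = np.random.default_rng(seed)
    pts = [np.zeros(2)]
    heading, angle = 0.0, 0.0
    for _ in range(n_steps):
        b = int(np.clip(np.digitize(angle, edges) - 1, 0, len(table) - 1))
        angle = rng.choice(table[b])
        heading += angle
        pts.append(pts[-1] + step * np.array([np.cos(heading), np.sin(heading)]))
    return np.array(pts)

# Learn from a short wavy example and synthesize a longer stroke in that style.
t = np.linspace(0, 4 * np.pi, 80)
example = np.column_stack((t * 3.0, np.sin(t)))
print(synthesize(learn_style(example), n_steps=200).shape)  # (201, 2)
```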
Citations: 16
Hybrid-space localized stylization method for view-dependent lines extracted from 3D models
Pub Date : 2015-06-20 DOI: 10.2312/EXP.20151181
L. Cardona, S. Saito
We propose a localized stylization method that combines object-space and image-space techniques to locally stylize view-dependent lines extracted from 3D models. In the input phase, the user can customize a style and draw strokes by tracing over view-dependent feature lines such as occluding contours and suggestive contours. For each stroke drawn, the system stores its style properties as well as its surface location on the underlying polygonal mesh in a data structure referred to as a registered stroke. In the rendering phase, a new attraction field leads active contours generated from the registered strokes to match the current frame's feature lines and maintain the style and path coordinates of strokes in nearby viewpoints. For each registered stroke, a limited surface region referred to as the influence area is used to improve the line-matching accuracy and discard obvious mismatches. The proposed stylization system produces uncluttered line drawings that convey additional information such as material properties or feature sharpness and is evaluated by measuring its usability and performance.
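To make the attraction-field idea concrete, the following sketch (not the paper's active-contour formulation) pulls 2D stroke samples toward the nearest feature-line pixel of the current frame using SciPy's Euclidean distance transform; the strength parameter and function name are assumptions.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def snap_to_feature_lines(stroke_pts, feature_mask, strength=0.5):
    """Pull 2-D stroke samples toward the current frame's feature lines.

    feature_mask: boolean H x W image, True on detected feature-line pixels.
    The distance transform's index output gives, per pixel, the nearest
    feature pixel; moving each sample part of the way there acts as a
    crude attraction field.
    """
    _, (iy, ix) = distance_transform_edt(~feature_mask, return_indices=True)
    snapped = []
    for x, y in stroke_pts:
        yi, xi = int(round(y)), int(round(x))
        target = np.array([ix[yi, xi], iy[yi, xi]], dtype=float)
        p = np.array([x, y], dtype=float)
        snapped.append(p + strength * (target - p))
    return np.array(snapped)

# Toy usage: a vertical feature line at x = 10; samples drift halfway toward it.
mask = np.zeros((32, 32), dtype=bool)
mask[:, 10] = True
print(snap_to_feature_lines([(14.0, 5.0), (6.0, 20.0)], mask))
```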
Citations: 10
Hierarchical motion brushes for animation instancing
Pub Date : 2014-08-08 DOI: 10.1145/2630397.2630402
Antoine Milliez, Gioacchino Noris, Ilya Baran, Stelian Coros, Marie-Paule Cani, Maurizio Nitti, A. Marra, M. Gross, R. Sumner
Our work on "motion brushes" provides a new workflow for the creation and reuse of 3D animation with a focus on stylized movement and depiction. Conceptually, motion brushes expand existing brush models by incorporating hierarchies of 3D animated content including geometry, appearance information, and motion data as core brush primitives that are instantiated using a painting interface. Because motion brushes can encompass all the richness of detail and movement offered by animation software, they accommodate complex, varied effects that are not easily created by other means. To support reuse and provide an effective means for managing complexity, we propose a hierarchical representation that allows simple brushes to be combined into more complex ones. Our system provides stroke-based control over motion-brush parameters, including tools to effectively manage the temporal nature of the motion brush instances. We demonstrate the flexibility and richness of our system with motion brushes for splashing rain, footsteps appearing in the snow, and stylized visual effects.
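A toy data structure can illustrate the hierarchy: a brush either references an animation clip directly or is composed of child brushes, and instancing a composite brush stamps every child, with per-child time offsets, at each painted position. The fields and the instantiate method below are hypothetical, not the system's API.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class MotionBrush:
    """A brush primitive bundling animated content, plus optional children.

    'clip' stands in for a reference to geometry/appearance/motion data;
    children let simple brushes be composed into more complex ones, and a
    composite brush stamps every child at each instanced position.
    """
    name: str
    clip: str = ""                      # e.g. a path to an animation asset
    time_offset: float = 0.0            # per-child delay within the parent
    children: List["MotionBrush"] = field(default_factory=list)

    def instantiate(self, position, stroke_time):
        """Return (asset, position, start_time) stamps for one brush dab."""
        start = stroke_time + self.time_offset
        if not self.children:
            return [(self.clip, position, start)]
        stamps = []
        for child in self.children:
            stamps += [(c, position, t + self.time_offset)
                       for c, _, t in child.instantiate(position, stroke_time)]
        return stamps

# A composite "rain splash" brush built from two simpler brushes, stamped
# along a painted stroke with increasing stroke time.
splash = MotionBrush("splash", children=[
    MotionBrush("droplet", clip="droplet.abc"),
    MotionBrush("ripple", clip="ripple.abc", time_offset=0.2),
])
for i, p in enumerate([(0, 0), (5, 1), (10, 3)]):
    print(splash.instantiate(p, stroke_time=i * 0.1))
```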
Citations: 9
Modular line-based halftoning via recursive division
Pub Date : 2014-08-08 DOI: 10.1145/2630397.2630403
Abdalla G. M. Ahmed
We present a new approach for stippling by recursively dividing a grayscale image into rectangles with equal amounts of ink; we then use the resulting structure to generate novel line-based halftoning techniques. We present four different rendering styles which share the same underlying structure, two of which bear some similarity to Bosch-Kaplan's TSP Art and Inoue-Urahama's MST Halftoning. The technique we present is fast enough for real-time interaction, and at least one of the four rendering styles is well suited for maze construction.
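The recursive division itself is concrete enough to sketch: treat darkness as ink, split each rectangle across its longer side at the position that balances the ink in the two halves, and stop once a rectangle holds at most a fixed amount of ink. The max_ink threshold is an assumed parameter, and drawing lines inside the rectangles is left out.

```python
import numpy as np

def equal_ink_rectangles(gray, max_ink=40.0):
    """Recursively split a grayscale image into rectangles of roughly equal ink.

    gray: 2-D array in [0, 1], where dark pixels carry more ink (1 - value).
    Each rectangle is split across its longer side at the position that
    balances the ink in the two halves; recursion stops once a rectangle
    holds at most 'max_ink'. The rectangles are the structure on which
    line-based renderings can then be drawn.
    """
    ink = 1.0 - np.asarray(gray, dtype=float)
    rects = []

    def split(y0, y1, x0, x1):
        region = ink[y0:y1, x0:x1]
        total = region.sum()
        if total <= max_ink or (y1 - y0 <= 1 and x1 - x0 <= 1):
            rects.append((y0, y1, x0, x1))
            return
        if (y1 - y0) >= (x1 - x0):                 # split across rows
            cum = np.cumsum(region.sum(axis=1))
            cut = y0 + 1 + int(np.searchsorted(cum, total / 2.0))
            cut = min(max(cut, y0 + 1), y1 - 1)
            split(y0, cut, x0, x1); split(cut, y1, x0, x1)
        else:                                      # split across columns
            cum = np.cumsum(region.sum(axis=0))
            cut = x0 + 1 + int(np.searchsorted(cum, total / 2.0))
            cut = min(max(cut, x0 + 1), x1 - 1)
            split(y0, y1, x0, cut); split(y0, y1, cut, x1)

    split(0, ink.shape[0], 0, ink.shape[1])
    return rects

# A dark disc on a light background yields many small rectangles inside the
# disc and a few large ones outside, i.e. denser lines where the image is dark.
yy, xx = np.mgrid[0:128, 0:128]
img = np.where((yy - 64) ** 2 + (xx - 64) ** 2 < 40 ** 2, 0.1, 0.9)
print(len(equal_ink_rectangles(img)))
```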
Citations: 9
Painting with triangles
Pub Date : 2014-08-08 DOI: 10.1145/2630397.2630399
M. D. Benjamin, S. DiVerdi, Adam Finkelstein
Although vector graphics offer a number of benefits, conventional vector painting programs offer only limited support for the traditional painting metaphor. We propose a new algorithm that translates a user's mouse motion into a triangle mesh representation. This triangle mesh can then be composited onto a canvas containing an existing mesh representation of earlier strokes. This representation allows the algorithm to render solid colors and linear gradients. It also enables painting at any resolution. This paradigm allows artists to create complex, multi-scale drawings with gradients and sharp features while avoiding pixel sampling artifacts.
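As a sketch of the basic ribbon construction such a brush model needs (assumptions only, without the paper's compositing, gradients, or resolution independence): offset each point of the mouse-motion polyline to both sides along the local normal and connect consecutive pairs into two triangles.

```python
import numpy as np

def stroke_to_triangles(points, radius=4.0):
    """Tessellate a mouse-motion polyline into a triangle strip.

    Each input point is offset to both sides along the local normal, giving
    two boundary vertices per point; consecutive vertex pairs form two
    triangles. Returns (vertices, triangles) with triangles as index triples.
    """
    pts = np.asarray(points, dtype=float)
    tangents = np.gradient(pts, axis=0)
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)
    normals = np.column_stack((-tangents[:, 1], tangents[:, 0]))
    verts = np.empty((2 * len(pts), 2))
    verts[0::2] = pts + radius * normals
    verts[1::2] = pts - radius * normals
    tris = []
    for i in range(len(pts) - 1):
        a, b, c, d = 2 * i, 2 * i + 1, 2 * i + 2, 2 * i + 3
        tris.append((a, b, c))
        tris.append((b, d, c))
    return verts, np.array(tris)

# A short diagonal drag becomes a ribbon of 2*(n-1) triangles.
v, t = stroke_to_triangles([(0, 0), (5, 2), (10, 3), (15, 7)])
print(v.shape, t.shape)   # (8, 2) (6, 3)
```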
Citations: 5
ChromoStereoscopic rendering for trichromatic displays
Pub Date : 2014-08-08 DOI: 10.1145/2630397.2630398
Leïla Schemali, E. Eisemann
The chromostereopsis phenomenon leads to a differing depth perception of different color hues; e.g., red is perceived slightly in front of blue. In chromostereoscopic rendering, 2D images are produced that encode depth in color. While the natural chromostereopsis of our human visual system is rather weak, it can be enhanced via ChromaDepth® glasses, which induce chromatic aberrations in one eye by refracting light of different wavelengths differently, thereby offsetting the projected position slightly in that eye. Although it might seem natural to map depth linearly to hue, which was also the basis of previous solutions, we demonstrate that such a mapping reduces the stereoscopic effect when using standard trichromatic displays or printing systems. We propose an algorithm which enables an improved stereoscopic experience with reduced artifacts.
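For reference, here is the naive linear depth-to-hue mapping that the abstract argues is suboptimal on trichromatic displays; the paper's improved mapping is not reproduced here. Near depths map to red and far depths to blue, following the usual ChromaDepth convention, and the function name is hypothetical.

```python
import colorsys

def naive_depth_to_rgb(depth):
    """Linear depth-to-hue baseline for ChromaDepth-style viewing.

    depth is in [0, 1], with 0 nearest. Near maps to red (hue 0), far to
    blue (hue 2/3), and hue varies linearly in between; saturation and
    value stay at 1.
    """
    hue = (2.0 / 3.0) * max(0.0, min(1.0, depth))
    return colorsys.hsv_to_rgb(hue, 1.0, 1.0)

for d in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(d, tuple(round(c, 2) for c in naive_depth_to_rgb(d)))
```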
Citations: 2
Creating personalized jigsaw puzzles
Pub Date : 2014-08-08 DOI: 10.1145/2630397.2630405
Cheryl Lau, Yuliy Schwartzburg, Appu Shaji, Zahra Sadeghipoor, S. Süsstrunk
Designing aesthetically pleasing and challenging jigsaw puzzles is considered an art that requires considerable skill and expertise. We propose a tool that allows novice users to create customized jigsaw puzzles based on the image content and a user-defined curve. A popular design choice among puzzle makers, called color line cutting, is to cut the puzzle along the main contours in an image, making the puzzle both aesthetically interesting and challenging to solve. At the same time, the puzzle maker has to make sure that puzzle pieces interlock so that they do not disassemble easily. Our method automatically optimizes for puzzle cuts that follow the main contours in the image and match the user-defined curve. We handle the tradeoff between color line cutting and interlocking, and we introduce a linear formulation for the interlocking constraint. We propose a novel method for eliminating self-intersections and ensuring a minimum width in our output curves. Our method satisfies these necessary fabrication constraints in order to make valid puzzles that can be easily realized with present-day laser cutters.
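The color line cutting objective can be illustrated with a toy score (not the paper's optimization, and without the interlocking constraint or curve matching): a candidate cut is rated by the mean image-gradient magnitude sampled along it, so cuts hugging strong contours score higher than cuts through flat regions.

```python
import numpy as np

def color_line_score(gray, cut_points):
    """Score a candidate puzzle cut by how strongly it follows image contours.

    gray:       2-D image in [0, 1]
    cut_points: list of (x, y) samples along the cut
    The score is the mean gradient magnitude sampled along the cut.
    """
    gy, gx = np.gradient(np.asarray(gray, dtype=float))
    mag = np.hypot(gx, gy)
    h, w = mag.shape
    samples = []
    for x, y in cut_points:
        xi = int(np.clip(round(x), 0, w - 1))
        yi = int(np.clip(round(y), 0, h - 1))
        samples.append(mag[yi, xi])
    return float(np.mean(samples))

# A cut that follows the boundary of a dark square beats one through its middle.
img = np.full((64, 64), 0.9)
img[16:48, 16:48] = 0.2
along_edge = [(16, y) for y in range(16, 48)]
through_flat = [(32, y) for y in range(16, 48)]
print(color_line_score(img, along_edge), color_line_score(img, through_flat))
```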
Citations: 8