
Eurographics Workshop on 3D Object Retrieval (EG 3DOR): Latest Publications

Fabric Appearance Benchmark
S. Merzbach, R. Klein
Appearance modeling is a difficult problem that still receives considerable attention from the graphics and vision communities. Though recent years have brought a growing number of high-quality material databases that have sparked new research, there is a general lack of evaluation benchmarks for performance assessment and fair comparisons between competing works. We therefore release a new dataset and pose a public challenge that will enable standardized evaluations. For this we measured 56 fabric samples with a commercial appearance scanner. We publish the resulting calibrated HDR images, along with baseline SVBRDF fits. The challenge is to recreate, under known light and view sampling, the appearance of a subset of unseen images. User submissions will be automatically evaluated and ranked by a set of standard image metrics. CCS Concepts • Computing methodologies → Reflectance modeling; Appearance and texture representations;
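The automatic ranking could work along the lines below; a minimal sketch assuming submissions and ground truth arrive as linear-HDR NumPy arrays, with tone-mapped PSNR standing in for the benchmark's set of standard image metrics (which the abstract does not enumerate) — function names are illustrative.

```python
import numpy as np

def tonemap(img, gamma=2.2):
    """Simple Reinhard tone mapping plus gamma, so the metric is
    computed on displayable values rather than raw HDR radiance."""
    img = img / (1.0 + img)
    return np.power(np.clip(img, 0.0, 1.0), 1.0 / gamma)

def psnr(pred, gt):
    """Peak signal-to-noise ratio between two images in [0, 1]."""
    mse = np.mean((pred - gt) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(1.0 / mse)

def rank_submissions(submissions, ground_truth):
    """Rank submissions by mean PSNR over all held-out views.

    submissions:  dict mapping user id -> list of predicted HDR images
    ground_truth: list of HDR images with matching light/view sampling
    """
    scores = {
        user: np.mean([psnr(tonemap(p), tonemap(g))
                       for p, g in zip(preds, ground_truth)])
        for user, preds in submissions.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```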
{"title":"Fabric Appearance Benchmark","authors":"S. Merzbach, R. Klein","doi":"10.2312/egp.20201035","DOIUrl":"https://doi.org/10.2312/egp.20201035","url":null,"abstract":"Appearance modeling is a difficult problem that still receives considerable attention from the graphics and vision communities. Though recent years have brought a growing number of high-quality material databases that have sparked new research, there is a general lack of evaluation benchmarks for performance assessment and fair comparisons between competing works. We therefore release a new dataset and pose a public challenge that will enable standardized evaluations. For this we measured 56 fabric samples with a commercial appearance scanner. We publish the resulting calibrated HDR images, along with baseline SVBRDF fits. The challenge is to recreate, under known light and view sampling, the appearance of a subset of unseen images. User submissions will be automatically evaluated and ranked by a set of standard image metrics. CCS Concepts • Computing methodologies → Reflectance modeling; Appearance and texture representations;","PeriodicalId":72958,"journal":{"name":"Eurographics ... Workshop on 3D Object Retrieval : EG 3DOR. Eurographics Workshop on 3D Object Retrieval","volume":"118 1","pages":"3-4"},"PeriodicalIF":0.0,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89398061","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
From Perception to Interaction with Virtual Characters
E. Zell, Katja Zibrek, Xueni Pan, M. Gillies, R. Mcdonnell
This course will introduce students, researchers and digital artists to the recent results in perceptual research on virtual characters. It covers how technical and artistic aspects that constitute the appearance of a virtual character influence human perception, and how to create a plausibility illusion in interactive scenarios with virtual characters. We will report results of studies that addressed the influence of low-level cues like facial proportions, shading or level of detail and higher-level cues such as behavior or artistic stylization. We will place emphasis on aspects that are encountered during character development, animation, interaction design and achieving consistency between the visuals and storytelling. We will close with the relationship between verbal and non-verbal interaction and introduce some concepts which are important for creating convincing character behavior in virtual reality. The insights that we present in this course will serve as an additional toolset to anticipate the effect of certain design decisions and to create more convincing characters, especially in the case where budgets or time are limited.

1. Course Description

Virtual humans are finding a growing number of applications, such as in social media apps, Spaces by Facebook, Bitmoji and Genies, as well as computer games and human-computer interfaces. Their use today has also extended from the typical on-screen display applications to immersive and collaborative environments (VR/AR/MR). At the same time, we are also witnessing significant improvements in real-time performance, increased visual fidelity of characters and novel devices. The question of how these developments will be received from the user's point of view, or which aspects of virtual characters influence the user more, has therefore never been so important.

This course will provide an overview of existing perceptual studies related to the topic of virtual characters. To make the course easier to follow, we start with a brief overview of human perception and how perceptual studies are conducted in terms of methods and experiment design. With knowledge of the methods, we continue with artistic and technical aspects which influence the design of character appearance (lighting and shading, facial feature placement, stylization, etc.). Important questions on character design will be addressed, such as: if I want my character to be highly appealing, should I render with realistic or stylized shading? What facial features make my character appear more trustworthy? Do dark shadows enhance the emotion my character is portraying?

We then dive deeper into the movement of the characters, exploring which information is present in the motion cues and how motion can, in combination with character appearance, guide our perception and even be a foundation of biased perception (stereotypes). Some examples of questions that we will address are: if I want my character to appear extroverted, what movement or appearance is needed to achieve this? Does the appearance of a character in a video game influence my moral decisions? We then move on to virtual reality, how it can be used to study the perception of virtual characters, and explore how the appearance of virtual characters influences the level of empathy we feel towards them. We also discuss possible behavioral measures for studying perception in virtual reality (VR).

In the final section we focus on the question of how interaction with virtual characters should be designed to improve task performance and become more immersive. The plausibility illusion is an important element in VR: it makes a VR experience more immersive and engaging, and ensures that skills learned in VR transfer directly to real-life experience. Starting from a brief review of publications that evaluate the plausibility illusion, we focus on the context of virtual characters and social presence or co-presence. The theory of the plausibility illusion suggests that the experience of interacting with a virtual character should be as close as possible to a face-to-face interaction with a real person. Human face-to-face interaction is highly multi-modal: the verbal content of a conversation is augmented by non-verbal signals that carry a wealth of information, such as tone of voice, facial expressions, gestures, gaze and spatial behavior. Interacting with a character involves a tight loop of sensing the person and generating the character's responses. The course will cover sensing technologies, types of responses and methods for mapping between the two. We will also discuss the relationship between verbal and non-verbal interaction, including the different roles people take in conversation: speaking, listening and other forms of non-verbal interaction. All of these questions will be informed by the psychology of social interaction and by current VR technology. We will use two examples to illustrate the design process for interaction with virtual characters in VR: one on training doctor-patient communication, and one on our recent project with a game company to create AI characters for the Peaky Blinders VR game.

The course provides an overview of the relevant research in a way that makes it easy to find answers to practical questions in production and character development. At the same time, we avoid definitive answers to questions of character and interaction design and encourage further investigation by listing open questions, so that the presented research can be assessed critically. Finally, taking part in a perceptual experiment is a multi-modal experience that cannot be reproduced by a descriptive report of the experiment design alone. For this reason we will select a few representative experiments and run highly compressed versions of them during the course for illustrative purposes. Stimuli will be shown on the projector wall, and participants will be able to rate them within a short time using their smartphones. The experiments will mostly be chosen to introduce a new topic. We are fully aware that the results obtained this way are in no sense representative, but we believe such live surveys improve the understanding of study design, increase audience engagement, and provide a welcome break during a 180-minute presentation.

At SIGGRAPH 2019 we gave a shorter (90-minute) version of this tutorial, which was well attended (around 100 participants). Attendees who were less familiar with research on character perception were especially positive about the applicability of the presented knowledge. Given this positive feedback, we have extended the tutorial on the topic of interaction.

Other related tutorials and courses over the past 10-15 years at SIGGRAPH, SIGGRAPH Asia or Eurographics have covered topics such as experiment design [CW13], visual perception of simple 3D shapes [FS09], and perception of graphics in display technologies and virtual-environment applications [GCL∗06a, TOY∗07, MR08]. Other courses covered a mix of low-level stimulus perception and graphics applications in which character perception was also partially addressed [OHM∗04, MMG11]. Finally, several courses focused on the perception of specific aspects of virtual characters, including (i) the expressiveness of body motion [VGS∗06, HOP09], (ii) crowds [BKA∗14, HLLO10, DMTPT09, TOY∗07], (iii) multi-disciplinary research on emotion covering aspects of philosophy, psychology and physiology [Ges12], and (iv) creating believable characters for conversation [JKF∗11]. Our course is the first to cover the perception of virtual humans in a single resource and addresses more recent work than previous courses. We consider it accessible to non-experts as well, and a starting point for further research on related topics. The course is suitable for students who want an overview of recent developments in perceptual research on virtual characters and want to identify open topics. In addition, it is specifically designed for researchers and artists who work on virtual characters but are less familiar with perceptual research.
{"title":"From Perception to Interaction with Virtual Characters","authors":"E. Zell, Katja Zibrek, Xueni Pan, M. Gillies, R. Mcdonnell","doi":"10.2312/egt.20201001","DOIUrl":"https://doi.org/10.2312/egt.20201001","url":null,"abstract":"This course will introduce students, researchers and digital artists to the recent results in perceptual research on virtual characters. It covers how technical and artistic aspects that constitute the appearance of a virtual character influence human perception, and how to create a plausibility illusion in interactive scenarios with virtual characters. We will report results of studies that addressed the influence of low-level cues like facial proportions, shading or level of detail and higher-level cues such as behavior or artistic stylization. We will place emphasis on aspects that are encountered during character development, animation, interaction design and achieving consistency between the visuals and storytelling. We will close with the relationship between verbal and non-verbal interaction and introduce some concepts which are important for creating convincing character behavior in virtual reality. The insights that we present in this course will serve as an additional toolset to anticipate the effect of certain design decisions and to create more convincing characters, especially in the case where budgets or time are limited. 1. Course Description Virtual humans are finding a growing number of applications, such as in social media apps, Spaces by Facebook, Bitmoji and Genies, as well as computer games and human-computer interfaces. Their use today has also extended from the typical on-screen display applications to immersive and collaborative environments (VR/AR/MR). At the same time, we are also witnessing significant improvements in real-time performance, increased visual fidelity of characters and novel devices. The question of how these developments will be received from the user’s point of view, or which aspects of virtual characters influence the user more, has therefore never been so important. This course will provide an overview of existing perceptual studies related to the topic of virtual characters. To make the course easier to follow, we start with a brief overview of human perception and how perceptual studies are conducted in terms of methods and experiment design. With knowledge of the methods, we continue with artistic and technical aspects which influence the design of character appearance (lighting and shading, facial feature placement, stylization, etc.). Important questions on character design will be addressed such as – if I want my character to be highly appealing, should I render with realistic or stylized shading? What facial features make my character appear more trustworthy? Do dark shadows enhance the emotion my character is portraying? We then dive deeper into the movement of the characters, exploring which information is present in the motion cues and how motion can, in combination with character appearance, guide our perception and even be a foundation of biased perception (stereotypes). Some examples of questions that we will address are – if I want my character to appear extroverted, what movement or app","PeriodicalId":72958,"journal":{"name":"Eurographics ... Workshop on 3D Object Retrieval : EG 3DOR. 
Eurographics Workshop on 3D Object Retrieval","volume":"26 1","pages":"5-31"},"PeriodicalIF":0.0,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86867025","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Procedural 3D Asteroid Surface Detail Synthesis
Xizhi Li, René Weller, G. Zachmann
We present a novel noise model to procedurally generate volumetric terrain on implicit surfaces. The main idea is to combine a novel Locally Controlled 3D Spot noise (LCSN) for authoring the macro structures and 3D Gabor noise to add micro details. More specifically, a spatially-defined kernel formulation in combination with an impulse distribution enables the LCSN to generate arbitrary size craters and boulders, while the Gabor noise generates stochastic Gaussian details. The corresponding metaball positions in the underlying implicit surface preserve locality to avoid the globality of traditional procedural noise textures, which yields an essential feature that is often missing in procedural texture based terrain generators. Furthermore, different noise-based primitives are integrated through operators, i.e. blending, replacing, or warping into the complex volumetric terrain. The result is a completely implicit representation and, as such, has the advantage of compactness as well as flexible user control. We applied our method to generating high quality asteroid meshes with fine surface details. CCS Concepts • Computing methodologies → Volumetric models;
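A toy illustration of the idea — not the paper's implementation — assuming a unit-sphere base shape; the kernel shapes, parameter ranges and blending below are simplified stand-ins for LCSN and sparse Gabor convolution noise:

```python
import numpy as np

rng = np.random.default_rng(7)

# Macro structure: sparse impulses, each with its own position, radius and
# depth, so crater-like features stay local to their kernel's support.
CRATERS = [(rng.uniform(-1, 1, 3), rng.uniform(0.1, 0.3), rng.uniform(0.02, 0.08))
           for _ in range(24)]

# Micro structure: precomputed impulses for a sparse Gabor convolution noise.
G_POS = rng.uniform(-1, 1, (64, 3))
G_DIR = rng.normal(size=(64, 3))
G_DIR /= np.linalg.norm(G_DIR, axis=1, keepdims=True)
G_W = rng.uniform(-1, 1, 64)

def spot_noise(p):
    """Locally controlled spot noise: sum of compactly supported bumps."""
    d = 0.0
    for center, radius, depth in CRATERS:
        r = np.linalg.norm(p - center) / radius
        if r < 1.0:
            d -= depth * (1.0 - r * r) ** 2   # smooth dent, zero outside kernel
    return d

def gabor_noise(p, freq=40.0, bandwidth=8.0):
    """Gabor noise: Gaussian-windowed cosines give stochastic micro detail."""
    diff = p - G_POS
    r2 = np.sum(diff * diff, axis=1)
    phase = freq * np.sum(G_DIR * diff, axis=1)
    return float(np.sum(G_W * np.exp(-bandwidth * r2) * np.cos(phase)))

def asteroid(p):
    """Implicit surface: unit sphere displaced by macro craters + micro noise."""
    p = np.asarray(p, dtype=float)
    return np.linalg.norm(p) - 1.0 + spot_noise(p) + 0.01 * gabor_noise(p)
```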
{"title":"Procedural 3D Asteroid Surface Detail Synthesis","authors":"Xizhi Li, René Weller, G. Zachmann","doi":"10.2312/egs.20201020","DOIUrl":"https://doi.org/10.2312/egs.20201020","url":null,"abstract":"We present a novel noise model to procedurally generate volumetric terrain on implicit surfaces. The main idea is to combine a novel Locally Controlled 3D Spot noise (LCSN) for authoring the macro structures and 3D Gabor noise to add micro details. More specifically, a spatially-defined kernel formulation in combination with an impulse distribution enables the LCSN to generate arbitrary size craters and boulders, while the Gabor noise generates stochastic Gaussian details. The corresponding metaball positions in the underlying implicit surface preserve locality to avoid the globality of traditional procedural noise textures, which yields an essential feature that is often missing in procedural texture based terrain generators. Furthermore, different noise-based primitives are integrated through operators, i.e. blending, replacing, or warping into the complex volumetric terrain. The result is a completely implicit representation and, as such, has the advantage of compactness as well as flexible user control. We applied our method to generating high quality asteroid meshes with fine surface details. CCS Concepts • Computing methodologies → Volumetric models;","PeriodicalId":72958,"journal":{"name":"Eurographics ... Workshop on 3D Object Retrieval : EG 3DOR. Eurographics Workshop on 3D Object Retrieval","volume":"1 1","pages":"69-72"},"PeriodicalIF":0.0,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79184996","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Practical Male Hair Aging Model
D. Volkmann, M. Walter
The modeling and rendering of hair in Computer Graphics have seen much progress in the last few years. However, modeling and rendering hair aging, visually seen as the loss of pigments, have not attracted the same attention. We introduce in this paper a biologically inspired hair aging system with two main parts: greying of individual hairs, and time evolution of greying over the scalp. The greying of individual hairs is based on current knowledge about melanin loss, whereas the evolution on the scalp is modeled by segmenting the scalp in regions and defining distinct time frames for greying to occur. Our experimental visual results present plausible results despite the relatively simple model. We validate the results by presenting side by side our results with real pictures of men at different ages.
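A minimal sketch of such a two-part model, with hypothetical onset ages, melanin-loss rate and color values (the paper's actual parameters and scalp segmentation are not given here):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical per-region onset ages (years); greying is commonly reported
# to start at the temples and spread across the scalp over time.
REGION_ONSET = {"temples": 35.0, "front": 40.0, "crown": 42.0, "back": 48.0}

def melanin_fraction(age, onset, rate=0.08):
    """Remaining melanin of one hair: full before onset, exponential loss after."""
    return 1.0 if age < onset else float(np.exp(-rate * (age - onset)))

def hair_color(age, region, base_rgb=(0.12, 0.08, 0.05)):
    """Blend from the pigmented base color toward white as melanin is lost.
    Per-hair jitter on the onset makes greying patchy rather than uniform."""
    onset = REGION_ONSET[region] + rng.normal(0.0, 4.0)
    m = melanin_fraction(age, onset)
    base, white = np.asarray(base_rgb), np.array([0.90, 0.90, 0.88])
    return m * base + (1.0 - m) * white

# Example: sample a strand on the temples of a 50-year-old model.
print(hair_color(50.0, "temples"))
```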
{"title":"A Practical Male Hair Aging Model","authors":"D. Volkmann, M. Walter","doi":"10.2312/egs.20201017","DOIUrl":"https://doi.org/10.2312/egs.20201017","url":null,"abstract":"The modeling and rendering of hair in Computer Graphics have seen much progress in the last few years. However, modeling and rendering hair aging, visually seen as the loss of pigments, have not attracted the same attention. We introduce in this paper a biologically inspired hair aging system with two main parts: greying of individual hairs, and time evolution of greying over the scalp. The greying of individual hairs is based on current knowledge about melanin loss, whereas the evolution on the scalp is modeled by segmenting the scalp in regions and defining distinct time frames for greying to occur. Our experimental visual results present plausible results despite the relatively simple model. We validate the results by presenting side by side our results with real pictures of men at different ages.","PeriodicalId":72958,"journal":{"name":"Eurographics ... Workshop on 3D Object Retrieval : EG 3DOR. Eurographics Workshop on 3D Object Retrieval","volume":"26 1","pages":"57-60"},"PeriodicalIF":0.0,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91034635","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
First Order Signed Distance Fields
Róbert Bán, Gábor Valasek
This paper investigates a first order generalization of signed distance fields. We show that we can improve accuracy and storage efficiency by incorporating the spatial derivatives of the signed distance function into the distance field samples. We show that a representation in power basis remains invariant under barycentric combination, as such, it is interpolated exactly by the GPU. Our construction is applicable in any geometric setting where point-surface distances can be queried. To emphasize the practical advantages of this approach, we apply our results to signed distance field generation from triangular meshes. We propose storage optimization approaches and offer a theoretical and empirical accuracy analysis of our proposed distance field type in relation to traditional, zero order distance fields. We show that the proposed representation may offer an order of magnitude improvement in storage while retaining the same precision as a higher resolution distance field. CCS Concepts • Computing methodologies → Ray tracing; Volumetric models;
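The reconstruction the abstract describes can be sketched on the CPU as follows; the grid layout and function names are assumptions, and on the GPU the same blend would be evaluated by hardware texture filtering when the coefficients are stored in power basis:

```python
import numpy as np

def sample_first_order_sdf(dist, grad, p, origin=0.0, h=1.0):
    """Reconstruct the distance at point p from a first order SDF grid.

    dist[i, j, k]    : signed distance stored at grid point (i, j, k)
    grad[i, j, k, :] : gradient of the distance function at that point

    Each of the 8 cell corners extrapolates with its own tangent plane
        f_c(p) = d_c + grad_c . (p - x_c)
    and the planes are blended trilinearly.
    """
    p = np.asarray(p, dtype=float)
    q = (p - origin) / h                    # continuous grid coordinates
    base = np.floor(q).astype(int)
    t = q - base                            # fractional position inside cell
    value = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                idx = (base[0] + dx, base[1] + dy, base[2] + dz)
                corner = np.array(idx) * h + origin
                plane = dist[idx] + grad[idx] @ (p - corner)
                w = ((t[0] if dx else 1 - t[0]) *
                     (t[1] if dy else 1 - t[1]) *
                     (t[2] if dz else 1 - t[2]))
                value += w * plane
    return value
```

Storing (d, ∇d) per sample quadruples the per-sample payload, but each sample carries a full tangent plane, which is what lets a much coarser grid match the accuracy of a higher resolution zero order field.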
{"title":"First Order Signed Distance Fields","authors":"Róbert Bán, Gábor Valasek","doi":"10.2312/egs.20201011","DOIUrl":"https://doi.org/10.2312/egs.20201011","url":null,"abstract":"This paper investigates a first order generalization of signed distance fields. We show that we can improve accuracy and storage efficiency by incorporating the spatial derivatives of the signed distance function into the distance field samples. We show that a representation in power basis remains invariant under barycentric combination, as such, it is interpolated exactly by the GPU. Our construction is applicable in any geometric setting where point-surface distances can be queried. To emphasize the practical advantages of this approach, we apply our results to signed distance field generation from triangular meshes. We propose storage optimization approaches and offer a theoretical and empirical accuracy analysis of our proposed distance field type in relation to traditional, zero order distance fields. We show that the proposed representation may offer an order of magnitude improvement in storage while retaining the same precision as a higher resolution distance field. CCS Concepts • Computing methodologies → Ray tracing; Volumetric models;","PeriodicalId":72958,"journal":{"name":"Eurographics ... Workshop on 3D Object Retrieval : EG 3DOR. Eurographics Workshop on 3D Object Retrieval","volume":"12 1","pages":"33-36"},"PeriodicalIF":0.0,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78223819","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Deep-Eyes: Fully Automatic Anime Character Colorization with Painting of Details on Empty Pupils
Kenta Akita, Yuki Morimoto, R. Tsuruno
Many studies have recently applied deep learning to the automatic colorization of line drawings. However, it is difficult to paint empty pupils using existing methods because the networks are trained with pupils that have edges, which are generated from color images using image processing. Most actual line drawings have empty pupils that artists must paint in. In this paper, we propose a novel network model that transfers the pupil details in a reference color image to input line drawings with empty pupils. We also propose a method for accurately and automatically coloring eyes. In this method, eye patches are extracted from a reference color image and automatically added to an input line drawing as color hints using our eye position estimation network. CCS Concepts • Computing methodologies → Image processing; • Applied computing → Fine arts;
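A rough sketch of the hint-construction step, assuming integer eye centers are already available from the eye-position estimation network; the patch size, names and the absence of boundary handling are simplifications:

```python
import numpy as np

def add_eye_hints(line_drawing, reference, ref_eyes, target_eyes, size=32):
    """Build a color-hint canvas for a colorization network.

    ref_eyes / target_eyes: lists of (x, y) eye centers in the reference
    color image and the line drawing, here assumed to come from an
    eye-position estimation network.  Patches around the reference eyes
    are copied to the corresponding positions of the line drawing so the
    generator receives explicit pupil detail as color hints.
    """
    h, w = line_drawing.shape[:2]
    hints = np.zeros((h, w, 3), dtype=reference.dtype)  # empty hint canvas
    r = size // 2
    for (rx, ry), (tx, ty) in zip(ref_eyes, target_eyes):
        rx, ry, tx, ty = int(rx), int(ry), int(tx), int(ty)
        patch = reference[ry - r:ry + r, rx - r:rx + r]
        hints[ty - r:ty + r, tx - r:tx + r] = patch
    return hints
```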
{"title":"Deep-Eyes: Fully Automatic Anime Character Colorization with Painting of Details on Empty Pupils","authors":"Kenta Akita, Yuki Morimoto, R. Tsuruno","doi":"10.2312/egs.20201023","DOIUrl":"https://doi.org/10.2312/egs.20201023","url":null,"abstract":"Many studies have recently applied deep learning to the automatic colorization of line drawings. However, it is difficult to paint empty pupils using existing methods because the networks are trained with pupils that have edges, which are generated from color images using image processing. Most actual line drawings have empty pupils that artists must paint in. In this paper, we propose a novel network model that transfers the pupil details in a reference color image to input line drawings with empty pupils. We also propose a method for accurately and automatically coloring eyes. In this method, eye patches are extracted from a reference color image and automatically added to an input line drawing as color hints using our eye position estimation network. CCS Concepts • Computing methodologies → Image processing; • Applied computing → Fine arts;","PeriodicalId":72958,"journal":{"name":"Eurographics ... Workshop on 3D Object Retrieval : EG 3DOR. Eurographics Workshop on 3D Object Retrieval","volume":"107 2 1","pages":"81-84"},"PeriodicalIF":0.0,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85375987","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
SHREC 2020 Track: Non-rigid Shape Correspondence of Physically-Based Deformations
R. Dyke, F. Zhou, Yu-Kun Lai, Paul L. Rosin, D. Guo, Kun Li, R. Marin, Jingyu Yang
{"title":"SHREC 2020 Track: Non-rigid Shape Correspondence of Physically-Based Deformations","authors":"R. Dyke, F. Zhou, Yu-Kun Lai, Paul L. Rosin, D. Guo, Kun Li, R. Marin, Jingyu Yang","doi":"10.2312/3dor.20201161","DOIUrl":"https://doi.org/10.2312/3dor.20201161","url":null,"abstract":"","PeriodicalId":72958,"journal":{"name":"Eurographics ... Workshop on 3D Object Retrieval : EG 3DOR. Eurographics Workshop on 3D Object Retrieval","volume":"8 1","pages":"19-26"},"PeriodicalIF":0.0,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72822266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
Designing a Course on Non-Photorealistic Rendering
Ivaylo Ilinkin
This paper presents a course design on Non-Photorealistic Rendering (NPAR). As a sub-field of computer graphics, NPAR aims to model artistic media, styles, and techniques that capture salient characteristics in images to convey particular information or mood. The results can be just as inspiring as the photorealistic scenes produced with the latest ray-tracing techniques even though the goals are fundamentally different. The paper offers ideas for developing a full course on NPAR by presenting a series of assignments that cover a wide range of NPAR techniques and shares experience on teaching such a course at the junior/senior undergraduate level. CCS Concepts • Computing methodologies → Non-photorealistic rendering;
{"title":"Designing a Course on Non-Photorealistic Rendering","authors":"Ivaylo Ilinkin","doi":"10.2312/eged.20201028","DOIUrl":"https://doi.org/10.2312/eged.20201028","url":null,"abstract":"This paper presents a course design on Non-Photorealistic Rendering (NPAR). As a sub-field of computer graphics, NPAR aims to model artistic media, styles, and techniques that capture salient characteristics in images to convey particular information or mood. The results can be just as inspiring as the photorealistic scenes produced with the latest ray-tracing techniques even though the goals are fundamentally different. The paper offers ideas for developing a full course on NPAR by presenting a series of assignments that cover a wide range of NPAR techniques and shares experience on teaching such a course at the junior/senior undergraduate level. CCS Concepts • Computing methodologies → Non-photorealistic rendering;","PeriodicalId":72958,"journal":{"name":"Eurographics ... Workshop on 3D Object Retrieval : EG 3DOR. Eurographics Workshop on 3D Object Retrieval","volume":"43 1","pages":"9-16"},"PeriodicalIF":0.0,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73801723","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Frequency-Aware Reconstruction of Fluid Simulations with Generative Networks
Simon Biland, V. C. Azevedo, Byungsoo Kim, B. Solenthaler
Convolutional neural networks were recently employed to fully reconstruct fluid simulation data from a set of reduced parameters. However, since (de-)convolutions traditionally trained with supervised L1-loss functions do not discriminate between low and high frequencies in the data, the error is not minimized efficiently for higher bands. This directly correlates with the quality of the perceived results, since missing high frequency details are easily noticeable. In this paper, we analyze the reconstruction quality of generative networks and present a frequency-aware loss function that is able to focus on specific bands of the dataset during training time. We show that our approach improves reconstruction quality of fluid simulation data in mid-frequency bands, yielding perceptually better results while requiring comparable training time.
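One way such a band-focused loss can be realized is with an FFT-space mask, sketched below in PyTorch; the band limits, weighting and masking scheme are placeholders, not the paper's exact formulation:

```python
import torch

def frequency_band_l1(pred, target, band=(0.2, 0.6), weight=4.0):
    """L1 loss with an extra penalty on a chosen frequency band.

    pred, target: (B, C, H, W) tensors.  A radial band-pass mask is built
    in Fourier space; frequencies whose normalized radius falls inside
    `band` are up-weighted, steering training toward mid-band detail
    that a plain L1 loss tends to neglect.
    """
    base = (pred - target).abs().mean()

    fp = torch.fft.fft2(pred)
    ft = torch.fft.fft2(target)
    h, w = pred.shape[-2:]
    fy = torch.fft.fftfreq(h, device=pred.device).abs()
    fx = torch.fft.fftfreq(w, device=pred.device).abs()
    # Normalized radial frequency in [0, 1].
    radius = torch.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2) / (0.5 * 2 ** 0.5)
    mask = ((radius >= band[0]) & (radius < band[1])).float()

    band_err = ((fp - ft).abs() * mask).mean()
    return base + weight * band_err
```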
{"title":"Frequency-Aware Reconstruction of Fluid Simulations with Generative Networks","authors":"Simon Biland, V. C. Azevedo, Byungsoo Kim, B. Solenthaler","doi":"10.2312/egs.20201019","DOIUrl":"https://doi.org/10.2312/egs.20201019","url":null,"abstract":"Convolutional neural networks were recently employed to fully reconstruct fluid simulation data from a set of reduced parameters. However, since (de-)convolutions traditionally trained with supervised L1-loss functions do not discriminate between low and high frequencies in the data, the error is not minimized efficiently for higher bands. This directly correlates with the quality of the perceived results, since missing high frequency details are easily noticeable. In this paper, we analyze the reconstruction quality of generative networks and present a frequency-aware loss function that is able to focus on specific bands of the dataset during training time. We show that our approach improves reconstruction quality of fluid simulation data in mid-frequency bands, yielding perceptually better results while requiring comparable training time.","PeriodicalId":72958,"journal":{"name":"Eurographics ... Workshop on 3D Object Retrieval : EG 3DOR. Eurographics Workshop on 3D Object Retrieval","volume":"73 1","pages":"65-68"},"PeriodicalIF":0.0,"publicationDate":"2019-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80446032","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 16
VITON-GAN: Virtual Try-on Image Generator Trained with Adversarial Loss
Shion Honda
Generating a virtual try-on image from in-shop clothing images and a model person's snapshot is a challenging task because the human body and clothes have high flexibility in their shapes. In this paper, we develop a Virtual Try-on Generative Adversarial Network (VITON-GAN), that generates virtual try-on images using images of in-shop clothing and a model person. This method enhances the quality of the generated image when occlusion is present in a model person's image (e.g., arms crossed in front of the clothes) by adding an adversarial mechanism in the training pipeline.
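A schematic of how the adversarial term enters the generator update, in PyTorch; `G`, `D` and the optimizer are stand-ins for the paper's networks, which are not specified here:

```python
import torch
import torch.nn.functional as F

def generator_step(G, D, person, clothing, target, opt_G, adv_weight=0.1):
    """One generator update combining L1 reconstruction with an adversarial term.

    G maps (person snapshot, in-shop clothing) -> try-on image; D scores
    realism.  The point is only how the adversarial loss is added to the
    pipeline, so that occluded regions (e.g. arms crossed in front of the
    clothes) are filled with plausible content instead of a blurry L1 average.
    """
    fake = G(person, clothing)
    recon = F.l1_loss(fake, target)
    logits = D(fake)
    # Non-saturating GAN loss: push D to classify the fake as real.
    adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    loss = recon + adv_weight * adv
    opt_G.zero_grad()
    loss.backward()
    opt_G.step()
    return loss.item()
```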
{"title":"VITON-GAN: Virtual Try-on Image Generator Trained with Adversarial Loss","authors":"Shion Honda","doi":"10.2312/egp.20191043","DOIUrl":"https://doi.org/10.2312/egp.20191043","url":null,"abstract":"Generating a virtual try-on image from in-shop clothing images and a model person's snapshot is a challenging task because the human body and clothes have high flexibility in their shapes. In this paper, we develop a Virtual Try-on Generative Adversarial Network (VITON-GAN), that generates virtual try-on images using images of in-shop clothing and a model person. This method enhances the quality of the generated image when occlusion is present in a model person's image (e.g., arms crossed in front of the clothes) by adding an adversarial mechanism in the training pipeline.","PeriodicalId":72958,"journal":{"name":"Eurographics ... Workshop on 3D Object Retrieval : EG 3DOR. Eurographics Workshop on 3D Object Retrieval","volume":"116 1","pages":"9-10"},"PeriodicalIF":0.0,"publicationDate":"2019-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74538896","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 15