
Proceedings. Pacific Conference on Computer Graphics and Applications: Latest Publications

Aesthetic Enhancement via Color Area and Location Awareness
Pub Date : 2022-01-01 DOI: 10.2312/pg.20221247
Bailin Yang, Qingxu Wang, Frederick W. B. Li, Xiaohui Liang, T. Wei, Changrui Zhu
Choosing a suitable color palette can typically improve image aesthetics; a naive approach is to choose harmonious colors from pre-defined color combinations on color wheels. However, color palettes only consider which color types are used, without specifying how much of the image each color should cover. It also remains challenging to automatically assign individual palette colors to suitable image regions so as to maximize image aesthetic quality. Motivated by these observations, we propose to construct a contribution-aware color palette from images of high aesthetic quality, enabling color transfer by matching the coloring and regional characteristics of an input image. We exploit public image datasets, extracting color composition and embedded color contribution features from aesthetic images to generate our proposed color palettes. We consider both image area ratio and image location as the color contribution features to extract. Quantitative experiments demonstrate that our method outperforms existing methods in terms of SSIM (Structural SIMilarity) and PSNR (Peak Signal-to-Noise Ratio) for objective image quality measurement and no-reference image assessment (NIMA) for image aesthetic scoring.
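To make the "color contribution" idea concrete, the sketch below (not the authors' code) shows one plausible way to compute per-color area-ratio and location features, assuming the palette comes from a simple k-means clustering of the pixels; the function name, cluster count, and use of scikit-learn are illustrative assumptions.

```python
# Hypothetical sketch: per-color contribution features (area ratio + mean location),
# assuming the palette is obtained by k-means clustering of the image pixels.
import numpy as np
from sklearn.cluster import KMeans

def color_contribution_features(image, k=5):
    """image: (H, W, 3) array; returns one feature dict per palette color."""
    h, w, _ = image.shape
    pixels = image.reshape(-1, 3).astype(np.float32)
    km = KMeans(n_clusters=k, n_init=4, random_state=0).fit(pixels)
    labels = km.labels_.reshape(h, w)

    ys, xs = np.mgrid[0:h, 0:w]
    features = []
    for c in range(k):
        mask = labels == c
        area_ratio = float(mask.mean())                          # fraction of pixels covered by this color
        cy = float(ys[mask].mean() / h) if mask.any() else 0.0   # normalized centroid row
        cx = float(xs[mask].mean() / w) if mask.any() else 0.0   # normalized centroid column
        features.append({
            "color": km.cluster_centers_[c],   # the palette color itself
            "area_ratio": area_ratio,          # "how much" of the color
            "location": (cx, cy),              # "where" the color sits in the image
        })
    return features
```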
Citations: 0
Improving View Independent Rendering for Multiview Effects
Pub Date : 2022-01-01 DOI: 10.2312/pg.20221244
Ajinkya Gavane, B. Watson
This paper describes improvements to view independent rendering (VIR) that make it much more useful for multiview effects. Improved VIR's (iVIR's) soft shadows are nearly identical in quality to VIR's and are produced at comparable speed (several times faster than multipass rendering), even when using a simpler bufferless implementation that does not risk overflow. iVIR's omnidirectional shadow results are still better, often nearly twice as fast as VIR's, even when bufferless. Most impressively, iVIR enables complex environment mapping in real time, producing high-quality reflections up to an order of magnitude faster than VIR, and 2-4 times faster than multipass rendering. CCS Concepts • Computing methodologies → Rendering; Graphics processors; Point-based models;
Citations: 1
An Interactive Modeling System of Japanese Castles with Decorative Objects
Pub Date : 2022-01-01 DOI: 10.2312/pg.20221240
S. Umeyama, Y. Dobashi
We present an interactive modeling system for Japanese castles. We develop a user interface that can generate the fundamental structure of the castle tower, consisting of stone walls, turrets, and roofs. By clicking on the screen with a mouse, the relevant parameters of the fundamental structure are automatically calculated to generate 3D models of Japanese-style castles. We use characteristic curves that often appear in ancient Japanese architecture for realistic modeling of the castles.
Citations: 0
DARC: A Visual Analytics System for Multivariate Applicant Data Aggregation, Reasoning and Comparison
Pub Date : 2022-01-01 DOI: 10.2312/pg.20221248
Yihan Hou, Yu Liu, Heming Wang, Zhichao Zhang, Yue-shan Li, Hai-Ning Liang, Lingyun Yu
People often make decisions based on their comprehensive understanding of various materials, their judgement of reasons, and comparison among choices. For instance, when hiring committees review multivariate applicant data, they need to consider and compare different aspects of the applicants' materials. However, the amount and complexity of multivariate data make it difficult to analyze the data, extract the most salient information, and then rapidly form opinions based on the extracted information. Thus, a fast and comprehensive understanding of multivariate data sets is a pressing need in many fields, such as business and education. In this work, we conducted in-depth interviews with stakeholders and characterized the user requirements involved in data-driven decision making when reviewing school applications. Based on these requirements, we propose DARC, a visual analytics system for facilitating decision making on multivariate applicant data. Through the system, users can gain insights into the multivariate data, form an overview of all data cases, and retrieve the original data in a quick and intuitive manner. The effectiveness of DARC is validated through observational user evaluations and interviews.
Citations: 0
Real-time Content Projection onto a Tunnel from a Moving Subway Train
Pub Date : 2021-01-01 DOI: 10.2312/PG.20211398
Jaedong Kim, Haegwang Eom, Jihwan Kim, Younghui Kim, Jun-yong Noh
In this study, we present the first actual working system that can project content onto a tunnel wall from a moving subway train, so that passengers can enjoy digital content displayed through a train window. To effectively estimate the position of the train in a tunnel, we propose counting sleepers, which are installed at regular intervals along the railway, using a distance sensor. The tunnel profile is constructed using point clouds captured by a depth camera installed next to the projector. The tunnel profile is used to identify projectable sections that will not contain too much interference from possible occluders. It is also used to retrieve the depth at a specific location so that properly warped content can be projected for viewing by passengers through the window while the train is moving at runtime. We show that the proposed system can operate on an actual train. CCS Concepts • Computing methodologies → Mixed / augmented reality;
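As a rough illustration of the sleeper-counting idea (an assumption-laden sketch, not the deployed system): a sensor aimed at the track bed reads a shorter distance each time it passes over a sleeper, so counting those pulses and multiplying by the known sleeper spacing yields a position estimate. The spacing and threshold values below are made up for illustration.

```python
# Hypothetical sketch of position estimation by counting sleepers from
# distance-sensor readings; sleeper_spacing_m and threshold_m are illustrative.
def estimate_position(distance_readings, sleeper_spacing_m=0.6, threshold_m=0.05):
    baseline = distance_readings[0]           # reading over plain track bed
    count = 0
    on_sleeper = False
    for d in distance_readings:
        near = (baseline - d) > threshold_m   # sensor is closer to the surface: over a sleeper
        if near and not on_sleeper:
            count += 1                        # rising edge: a new sleeper passed
        on_sleeper = near
    return count * sleeper_spacing_m          # estimated distance travelled along the tunnel
```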
Citations: 1
GANST: Gradient-aware Arbitrary Neural Style Transfer
Pub Date : 2021-01-01 DOI: 10.2312/PG.20211399
Haichao Zhu
{"title":"GANST: Gradient-aware Arbitrary Neural Style Transfer","authors":"Haichao Zhu","doi":"10.2312/PG.20211399","DOIUrl":"https://doi.org/10.2312/PG.20211399","url":null,"abstract":"","PeriodicalId":88304,"journal":{"name":"Proceedings. Pacific Conference on Computer Graphics and Applications","volume":"29 1","pages":"93-98"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89830249","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Volumetric Video Streaming Data Reduction Method Using Front-mesh 3D Data
Pub Date : 2021-01-01 DOI: 10.2312/pg.20211395
X. Zhao, T. Okuyama
Volumetric video content is attracting much attention across various industries for its six-degrees-of-freedom (6DoF) viewing experience. However, in terms of streaming, volumetric video content still presents challenges such as high data volume and bandwidth consumption, which place high stress on the network. To address this issue, we propose a method that uses front-mesh 3D data to reduce the data size without much effect on visual quality from a user's perspective. The proposed method also reduces decoding and import time on the client side, which enables faster playback of 3D data. We evaluated our method in terms of data reduction and computational complexity, and conducted a qualitative analysis by comparing rendering results with reference data at different diagonal angles. Our method successfully reduces data volume and computational complexity with minimal loss of visual quality. CCS Concepts • Information systems → Multimedia streaming; • Computing methodologies → Image compression;
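The abstract does not detail how the front mesh is extracted; the snippet below is only a minimal sketch of the general idea under our own assumptions: keep the triangles whose normals face an expected viewing direction and drop the rest before encoding and streaming.

```python
# Minimal sketch (an assumption, not the paper's pipeline): retain only the
# triangles that face an expected viewing direction.
import numpy as np

def front_facing_faces(vertices, faces, view_dir):
    """vertices: (N, 3) float array of positions; faces: (M, 3) int array of vertex
    indices; view_dir: unit vector pointing from the scene toward the viewer."""
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    normals = np.cross(v1 - v0, v2 - v0)            # unnormalized face normals
    facing = normals @ np.asarray(view_dir) > 0.0   # positive dot product: faces the viewer
    return faces[facing]                            # reduced face list to encode and stream
```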
Citations: 0
Neural Proxy: Empowering Neural Volume Rendering for Animation
Pub Date : 2021-01-01 DOI: 10.2312/pg.20211384
Zackary P. T. Sin, P. H. F. Ng, H. Leong
{"title":"Neural Proxy: Empowering Neural Volume Rendering for Animation","authors":"Zackary P. T. Sin, P. H. F. Ng, H. Leong","doi":"10.2312/pg.20211384","DOIUrl":"https://doi.org/10.2312/pg.20211384","url":null,"abstract":"","PeriodicalId":88304,"journal":{"name":"Proceedings. Pacific Conference on Computer Graphics and Applications","volume":"25 1","pages":"31-36"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85547911","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
3D-CariNet: End-to-end 3D Caricature Generation from Natural Face Images with Differentiable Renderer
Pub Date : 2021-01-01 DOI: 10.2312/PG.20211387
Meijia Huang, Ju Dai, Junjun Pan, Junxuan Bai, Hong Qin
Caricatures are an artistic representation of human faces used to express satire and humor. Caricature generation of human faces is a hotspot in computer graphics research. Previous work mainly focuses on 2D caricature generation from face photos or 3D caricature reconstruction from caricature images. In this paper, we propose a novel end-to-end method to directly generate personalized 3D caricatures from a single natural face image. It can create not only exaggerated geometric shapes but also heterogeneous texture styles. First, we construct a synthetic dataset containing matched data pairs composed of face photos, caricature images, and 3D caricatures. Then, we design a graph convolutional autoencoder to build a non-linear colored mesh model that learns the shape and texture of 3D caricatures. To make the network end-to-end trainable, we incorporate a differentiable renderer to render the 3D caricatures back into caricature images. Experiments demonstrate that our method can generate 3D caricatures with various texture styles from face images while maintaining personality characteristics.
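For readers unfamiliar with graph convolutions on meshes, the layer below is a generic sketch of the kind of building block such an autoencoder might stack (a standard GCN-style layer, not the paper's architecture); the feature layout and adjacency normalization are assumptions.

```python
# Generic graph-convolution layer sketch in PyTorch; per-vertex features could be
# positions plus colors for a colored mesh. Illustrative only, not the paper's code.
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x: (V, in_dim) vertex features; adj: (V, V) row-normalized mesh adjacency
        return torch.relu(self.linear(adj @ x))   # aggregate neighbors, then transform
```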
Citations: 0
Fast and Lightweight Path Guiding Algorithm on GPU
Pub Date : 2021-01-01 DOI: 10.2312/pg.20211379
Juhyeon Kim, Y. Kim
We propose a simple yet practical path guiding algorithm that runs on the GPU. Path guiding renders photo-realistic images by simulating the iterative bounces of rays, which are sampled from a learned radiance distribution. The radiance distribution is often learned by serially updating a hierarchical data structure that represents complex scene geometry, which is not easily implemented on the GPU. In contrast, we employ a regular data structure and allow fast updates by processing a large number of rays on the GPU. We further increase the efficiency of radiance learning by employing SARSA [SB18] from reinforcement learning. SARSA neither aggregates incident radiance from all directions nor stores all of the previous paths. The learned distribution is then sampled with an optimized rejection sampling scheme, which adapts to the current surface normal to reflect finer geometry than the grid resolution. All of the algorithms have been implemented on the GPU using a megakernel architecture with NVIDIA OptiX [PBD*10]. Through numerous experiments on complex scenes, we demonstrate that our proposed path guiding algorithm works efficiently on the GPU, drastically reducing the number of wasted paths. CCS Concepts • Computing methodologies → Ray tracing; Reinforcement learning; Massively parallel algorithms;
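To illustrate what a SARSA-style radiance update might look like on a regular grid, here is a schematic sketch under our own assumptions about the cell/direction discretization, learning rate, and reward definition; it is not the paper's implementation.

```python
# Schematic SARSA update for a tabular radiance estimate Q[cell, direction].
# The discretization, learning rate, and reward definition are illustrative.
import numpy as np

def sarsa_update(Q, cell, direction, reward, next_cell, next_direction, alpha=0.2):
    """Q: numpy array of shape (num_cells, num_dirs); reward: radiance observed
    along the sampled bounce (e.g. light emitted at the hit point).
    Unlike Q-learning, SARSA bootstraps from the direction actually sampled next,
    so no aggregation over all incident directions is needed."""
    td_target = reward + Q[next_cell, next_direction]
    Q[cell, direction] += alpha * (td_target - Q[cell, direction])
    return Q
```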
Citations: 2