
Proceedings. Pacific Conference on Computer Graphics and Applications — Latest Publications

Real-Time Antialiased Area Lighting Using Multi-Scale Linearly Transformed Cosines
Pub Date : 2021-01-01 DOI: 10.2312/PG.20211380
Chengzhi Tao, Jie Guo, Chen Gong, Beibei Wang, Yanwen Guo
We present an anti-aliased real-time rendering method for local area lights based on Linearly Transformed Cosines (LTCs). It significantly reduces the aliasing artifacts in highlights reflected from area lights that arise when meso-scale roughness (induced by normal maps) is ignored. The proposed method separates the surface roughness into different scales and represents them all by LTCs. Then, spherical convolution is conducted between them to derive the overall normal distribution and the final Bidirectional Reflectance Distribution Function (BRDF). The overall surface roughness is further approximated by a polynomial function to guarantee high efficiency and avoid additional storage consumption. Experimental results show that our approach produces convincing results of multi-scale roughness across a range of viewing distances for local area lighting. CCS Concepts • Computing methodologies → Reflectance modeling;
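The core idea of combining roughness scales and then replacing the combination with a cheap polynomial can be sketched as follows. This is a toy stand-in, not the paper's code: the variance-sum heuristic for "convolving" two GGX-like lobes and the degree-3 fit are illustrative assumptions.

```python
import numpy as np

def combined_roughness(alpha_base, alpha_meso):
    # Spherical convolution of two roughness lobes is commonly
    # approximated by adding their variances (a heuristic, not the
    # paper's exact derivation).
    return np.sqrt(alpha_base ** 2 + alpha_meso ** 2)

def fit_roughness_polynomial(alpha_base, degree=3):
    # Precompute a polynomial mapping meso-scale roughness to overall
    # roughness, mimicking the paper's cheap runtime approximation.
    meso = np.linspace(0.0, 1.0, 64)
    total = combined_roughness(alpha_base, meso)
    return np.polynomial.polynomial.Polynomial.fit(meso, total, degree)

poly = fit_roughness_polynomial(alpha_base=0.2)
approx = poly(0.5)                       # fast lookup at shading time
exact = combined_roughness(0.2, 0.5)     # reference value
```

At shading time only the polynomial is evaluated, which avoids storing the full convolved distribution per pixel.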
{"title":"Real-Time Antialiased Area Lighting Using Multi-Scale Linearly Transformed Cosines","authors":"Chengzhi Tao, Jie Guo, Chen Gong, Beibei Wang, Yanwen Guo","doi":"10.2312/PG.20211380","DOIUrl":"https://doi.org/10.2312/PG.20211380","url":null,"abstract":"We present an anti-aliased real-time rendering method for local area lights based on Linearly Transformed Cosines (LTCs). It significantly reduces the aliasing artifacts of highlights reflected from area lights due to ignoring the meso-scale roughness (induced by normal maps). The proposed method separates the surface roughness into different scales and represents them all by LTCs. Then, spherical convolution is conducted between them to derive the overall normal distribution and the final Bidirectional Reflectance Distribution Function (BRDF). The overall surface roughness is further approximated by a polynomial function to guarantee high efficiency and avoid additional storage consumption. Experimental results show that our approach produces convincing results of multi-scale roughness across a range of viewing distances for local area lighting. CCS Concepts • Computing methodologies → Reflectance modeling;","PeriodicalId":88304,"journal":{"name":"Proceedings. Pacific Conference on Computer Graphics and Applications","volume":"30 1","pages":"7-12"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81479488","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Art-directing Appearance using an Environment Map Latent Space
Pub Date : 2021-01-01 DOI: 10.2312/PG.20211386
Lohit Petikam, Andrew Chalmers, K. Anjyo, Taehyun Rhee
In look development, environment maps (EMs) are used to verify 3D appearance in varied lighting (e.g., overcast, sunny, and indoor). Artists can only assign one fixed material, making it laborious to edit appearance uniquely for all EMs. Artists can art-direct material and lighting in film post-production. However, this is impossible in dynamic real-time games and live augmented reality (AR), where environment lighting is unpredictable. We present a new workflow to customize appearance variation across a wide range of EM lighting, for live applications. Appearance edits can be predefined, and then automatically adapted to environment lighting changes. We achieve this by learning a novel 2D latent space of varied EM lighting. The latent space lets artists browse EMs in a semantically meaningful 2D view. For different EMs, artists can paint different material and lighting parameter values directly on the latent space. We robustly encode new EMs into the same space, for automatic look-up of the desired appearance. This solves a new problem of preserving art-direction in live applications, without any artist intervention. CCS Concepts • Computing methodologies → Dimensionality reduction and manifold learning; Rendering;
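The paint-then-look-up workflow can be sketched minimally: embed EMs into a 2D latent space, attach artist-painted parameters to the embedded points, and encode new EMs into the same space for nearest-neighbour retrieval. PCA via SVD stands in for the paper's learned embedding, and the random "EMs" and glossiness values are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for environment maps: flattened low-res luminance grids.
ems = rng.random((20, 64))

# Learn a 2D latent space (PCA here, standing in for the learned embedding).
mean = ems.mean(axis=0)
centered = ems - mean
_, _, vt = np.linalg.svd(centered, full_matrices=False)
latent = centered @ vt[:2].T           # (20, 2) latent coordinates

# Artist "paints" a material parameter (e.g., glossiness) per EM
# at its latent position.
painted_gloss = rng.random(20)

def lookup_appearance(new_em):
    # Encode the new EM into the same latent space and fetch the
    # nearest painted parameter value.
    z = (new_em - mean) @ vt[:2].T
    idx = np.argmin(np.linalg.norm(latent - z, axis=1))
    return painted_gloss[idx]

gloss = lookup_appearance(ems[3])
```

A real system would interpolate between painted samples rather than snap to the nearest one; the nearest-neighbour lookup just keeps the sketch short.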
{"title":"Art-directing Appearance using an Environment Map Latent Space","authors":"Lohit Petikam, Andrew Chalmers, K. Anjyo, Taehyun Rhee","doi":"10.2312/PG.20211386","DOIUrl":"https://doi.org/10.2312/PG.20211386","url":null,"abstract":"In look development, environment maps (EMs) are used to verify 3D appearance in varied lighting (e.g., overcast, sunny, and indoor). Artists can only assign one fixed material, making it laborious to edit appearance uniquely for all EMs. Artists can artdirect material and lighting in film post-production. However, this is impossible in dynamic real-time games and live augmented reality (AR), where environment lighting is unpredictable. We present a new workflow to customize appearance variation across a wide range of EM lighting, for live applications. Appearance edits can be predefined, and then automatically adapted to environment lighting changes. We achieve this by learning a novel 2D latent space of varied EM lighting. The latent space lets artists browse EMs in a semantically meaningful 2D view. For different EMs, artists can paint different material and lighting parameter values directly on the latent space. We robustly encode new EMs into the same space, for automatic look-up of the desired appearance. This solves a new problem of preserving art-direction in live applications, without any artist intervention. CCS Concepts • Computing methodologies → Dimensionality reduction and manifold learning; Rendering;","PeriodicalId":88304,"journal":{"name":"Proceedings. 
Pacific Conference on Computer Graphics and Applications","volume":"35 1","pages":"43-48"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86695933","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Human Motion Synthesis and Control via Contextual Manifold Embedding
Pub Date : 2021-01-01 DOI: 10.2312/PG.20211383
Rui Zeng, Ju Dai, Junxuan Bai, Junjun Pan, Hong Qin
Modeling motion dynamics for precise and rapid control by deterministic data-driven models is challenging due to the natural randomness of human motion. To address this, we propose a novel framework for continuous motion control by probabilistic latent variable models. The control is implemented by recurrently querying between historical and target motion states rather than exact motion data. Our model takes a conditional encoder-decoder form in two stages. Firstly, we utilize the Gaussian Process Latent Variable Model (GPLVM) to project motion poses to a compact latent manifold. Motion states, such as walking phase and forward velocity, can be clearly recognized by analysis on the manifold. Secondly, taking the manifold as a prior, a Recurrent Neural Network (RNN) encoder makes temporal latent predictions from the previous and control states. An attention module then morphs the prediction by measuring latent similarities to control states and predicted states, thus dynamically preserving contextual consistency. In the end, the GP decoder reconstructs motion states back to motion frames. Experiments on walking datasets show that our model is able to maintain motion states autoregressively while performing rapid and smooth transitions for the control. CCS Concepts • Computing methodologies → Motion processing; Motion capture; Motion path planning; Learning latent representations;
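The recurrent query between the previous latent state and the control target can be illustrated with a drastically simplified stand-in: a fixed blend replaces the RNN encoder and attention module, and the 2D latent points are placeholders for GPLVM coordinates.

```python
import numpy as np

def predict_latent(z_prev, z_target, alpha=0.3):
    # Stand-in for the RNN encoder + attention: move the latent motion
    # state toward the control target while keeping temporal continuity.
    return (1.0 - alpha) * z_prev + alpha * z_target

def rollout(z0, z_target, steps=20):
    # Autoregressive rollout in latent space; each frame would then be
    # decoded back to a pose by the GP decoder.
    zs = [z0]
    for _ in range(steps):
        zs.append(predict_latent(zs[-1], z_target))
    return np.array(zs)

traj = rollout(np.zeros(2), np.array([1.0, -1.0]))
```

The blend factor controls how rapid the transition is; the paper's attention module adapts this behaviour from context instead of fixing it.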
{"title":"Human Motion Synthesis and Control via Contextual Manifold Embedding","authors":"Rui Zeng, Ju Dai, Junxuan Bai, Junjun Pan, Hong Qin","doi":"10.2312/PG.20211383","DOIUrl":"https://doi.org/10.2312/PG.20211383","url":null,"abstract":"Modeling motion dynamics for precise and rapid control by deterministic data-driven models is challenging due to the natural randomness of human motion. To address it, we propose a novel framework for continuous motion control by probabilistic latent variable models. The control is implemented by recurrently querying between historical and target motion states rather than exact motion data. Our model takes a conditional encoder-decoder form in two stages. Firstly, we utilize Gaussian Process Latent Variable Model (GPLVM) to project motion poses to a compact latent manifold. Motion states could be clearly recognized by analyzing on the manifold, such as walking phase and forwarding velocity. Secondly, taking manifold as prior, a Recurrent Neural Network (RNN) encoder makes temporal latent prediction from the previous and control states. An attention module then morphs the prediction by measuring latent similarities to control states and predicted states, thus dynamically preserving contextual consistency. In the end, the GP decoder reconstructs motion states back to motion frames. Experiments on walking datasets show that our model is able to maintain motion states autoregressively while performing rapid and smooth transitions for the control. CCS Concepts • Computing methodologies → Motion processing; Motion capture; Motion path planning; Learning latent representations;","PeriodicalId":88304,"journal":{"name":"Proceedings. 
Pacific Conference on Computer Graphics and Applications","volume":"10 3 1","pages":"25-30"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73673739","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
SM-NET: Reconstructing 3D Structured Mesh Models from Single Real-World Image
Pub Date : 2021-01-01 DOI: 10.2312/PG.20211388
Yue Yu, Ying Li, Jingyi Zhang, Yue Yang
Image-based 3D structured model reconstruction enables the network to learn the missing information between the dimensions and understand the structure of the 3D model. In this paper, SM-NET is proposed in order to reconstruct a 3D structured mesh model from a single real-world image. First, it considers the model as a sequence of parts and designs a shape autoencoder to autoencode the 3D model. Second, the network extracts 2.5D information from the real-world image and maps it to the latent space of the shape autoencoder. Finally, both are connected to complete the reconstruction task. Besides, a more reasonable 3D structured model dataset is built to enhance the effect of reconstruction. The experimental results show that we achieve the reconstruction of 3D structured mesh models from a single real-world image, outperforming other approaches.
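The two-stage wiring — a shape autoencoder plus an image encoder that targets its latent space — can be sketched with linear maps. Everything here is a toy assumption: orthonormal matrices stand in for the trained networks, and the feature and latent sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stage 1: toy shape autoencoder with orthonormal columns, so that
# encode/decode invert each other on the latent manifold.
A = rng.standard_normal((32, 8))
Q, _ = np.linalg.qr(A)                  # (32, 8)

def shape_encode(parts):                # part-sequence vector -> latent
    return Q.T @ parts

def shape_decode(z):                    # latent -> part-sequence vector
    return Q @ z

# Stage 2: image encoder (stand-in) maps 2.5D features into the SAME
# latent space, so the shape decoder can finish the reconstruction.
M = rng.standard_normal((8, 16)) * 0.1

def reconstruct(image_features):
    return shape_decode(M @ image_features)

mesh = reconstruct(rng.standard_normal(16))
```

The point of the wiring is that the image branch never has to learn a decoder: it only has to land in the autoencoder's latent space.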
{"title":"SM-NET: Reconstructing 3D Structured Mesh Models from Single Real-World Image","authors":"Yue Yu, Ying Li, Jingyi Zhang, Yue Yang","doi":"10.2312/PG.20211388","DOIUrl":"https://doi.org/10.2312/PG.20211388","url":null,"abstract":"Image-based 3D structured model reconstruction enables the network to learn the missing information between the dimensions and understand the structure of the 3D model. In this paper, SM-NET is proposed in order to reconstruct 3D structured mesh model based on single real-world image. First, it considers the model as a sequence of parts and designs a shape autoencoder to autoencode 3D model. Second, the network extracts 2.5D information from the real-world image and maps it to the latent space of the shape autoencoder. Finally, both are connected to complete the reconstruction task. Besides, a more reasonable 3D structured model dataset is built to enhance the effect of reconstruction. The experimental results show that we achieve the reconstruction of 3D structured mesh model based on single real-world image, outperforming other approaches.","PeriodicalId":88304,"journal":{"name":"Proceedings. Pacific Conference on Computer Graphics and Applications","volume":"37 1","pages":"55-60"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82454422","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A deep learning based interactive sketching system for fashion images design
Pub Date : 2020-10-09 DOI: 10.2312/PG.20201224
Yao Li, Xianggang Yu, Xiaoguang Han, Nianjuan Jiang, K. Jia, Jiangbo Lu
In this work, we propose an interactive system to design diverse high-quality garment images from fashion sketches and texture information. The major challenge behind this system is to generate high-quality and detailed texture according to the user-provided texture information. Prior works mainly use the texture patch representation and try to map a small texture patch to a whole garment image, and hence are unable to generate high-quality details. In contrast, inspired by intrinsic image decomposition, we decompose this task into texture synthesis and shading enhancement. In particular, we propose a novel bi-colored edge texture representation to synthesize textured garment images and a shading enhancer to render shading based on the grayscale edges. The bi-colored edge representation provides simple but effective texture cues and color constraints, so that the details can be better reconstructed. Moreover, with the rendered shading, the synthesized garment image becomes more vivid.
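One plausible reading of a "bi-colored edge" representation is an edge map paired with two representative colors. The sketch below is a guess at that idea, not the paper's construction: gradient-thresholded edges plus mean edge/non-edge colors of a patch.

```python
import numpy as np

def bi_colored_edges(patch, threshold=0.2):
    # Toy bi-colored edge representation: an edge mask from luminance
    # gradients, plus the patch's mean edge and non-edge colors.
    lum = patch.mean(axis=2)
    gy, gx = np.gradient(lum)
    edges = np.hypot(gx, gy) > threshold
    fg = patch[edges].mean(axis=0) if edges.any() else patch.mean(axis=(0, 1))
    bg = patch[~edges].mean(axis=0)
    return edges, fg, bg

# Test patch: black left half, white right half -> one vertical edge band.
patch = np.zeros((8, 8, 3))
patch[:, 4:] = 1.0
edges, fg, bg = bi_colored_edges(patch)
```

Such a compact cue (where the edges are, and which two colors bound them) is what lets a generator reconstruct detail without storing the full texture.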
{"title":"A deep learning based interactive sketching system for fashion images design","authors":"Yao Li, Xianggang Yu, Xiaoguang Han, Nianjuan Jiang, K. Jia, Jiangbo Lu","doi":"10.2312/PG.20201224","DOIUrl":"https://doi.org/10.2312/PG.20201224","url":null,"abstract":"In this work, we propose an interactive system to design diverse high-quality garment images from fashion sketches and the texture information. The major challenge behind this system is to generate high-quality and detailed texture according to the user-provided texture information. Prior works mainly use the texture patch representation and try to map a small texture patch to a whole garment image, hence unable to generate high-quality details. In contrast, inspired by intrinsic image decomposition, we decompose this task into texture synthesis and shading enhancement. In particular, we propose a novel bi-colored edge texture representation to synthesize textured garment images and a shading enhancer to render shading based on the grayscale edges. The bi-colored edge representation provides simple but effective texture cues and color constraints, so that the details can be better reconstructed. Moreover, with the rendered shading, the synthesized garment image becomes more vivid.","PeriodicalId":88304,"journal":{"name":"Proceedings. Pacific Conference on Computer Graphics and Applications","volume":"125 1","pages":"13-18"},"PeriodicalIF":0.0,"publicationDate":"2020-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79025771","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Using Landmarks for Near-Optimal Pathfinding on the CPU and GPU
Pub Date : 2020-01-01 DOI: 10.2312/pg.20201228
M. Reischl, Christian Knauer, M. Guthe
{"title":"Using Landmarks for Near-Optimal Pathfinding on the CPU and GPU","authors":"M. Reischl, Christian Knauer, M. Guthe","doi":"10.2312/pg.20201228","DOIUrl":"https://doi.org/10.2312/pg.20201228","url":null,"abstract":"","PeriodicalId":88304,"journal":{"name":"Proceedings. Pacific Conference on Computer Graphics and Applications","volume":"1 1","pages":"37-42"},"PeriodicalIF":0.0,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88299927","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Stroke Synthesis for Inbetweening of Rough Line Animations
Pub Date : 2020-01-01 DOI: 10.2312/pg.20201233
Jiazhou Chen, Xinding Zhu, P. Bénard, Pascal Barla
{"title":"Stroke Synthesis for Inbetweening of Rough Line Animations","authors":"Jiazhou Chen, Xinding Zhu, P. Bénard, Pascal Barla","doi":"10.2312/pg.20201233","DOIUrl":"https://doi.org/10.2312/pg.20201233","url":null,"abstract":"","PeriodicalId":88304,"journal":{"name":"Proceedings. Pacific Conference on Computer Graphics and Applications","volume":"40 1","pages":"51-52"},"PeriodicalIF":0.0,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74627227","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Simple Simulation of Curved Folds Based on Ruling-aware Triangulation
Pub Date : 2020-01-01 DOI: 10.2312/pg.20201227
Kosuke Sasaki, J. Mitani
Folding a thin sheet material such as paper along curves creates a developable surface composed of ruled surface patches. When using such surfaces in design, designers often repeat a process of folding along curves drawn on a sheet and checking the folded shape. Although several methods for constructing such shapes on a computer have been proposed, it is still difficult to check the folded shapes instantly from the crease patterns. In this paper, we propose a simple method that approximately realizes a simulation of curved folds with a triangular mesh from its crease pattern. The proposed method first approximates curves in a crease pattern with polylines and then generates a triangular mesh. In order to construct the discretized developable surface, the edges in the mesh are rearranged so that they align with the estimated rulings. The proposed method is characterized by its simplicity and is implemented on an existing origami simulator that runs in a web browser. CCS Concepts • Computing methodologies → Mesh models; Mesh geometry models;
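The first step the abstract names, approximating a crease curve with a polyline before triangulating, is easy to make concrete. This is a generic sketch of that step only (the curve and segment count are illustrative), not the paper's implementation.

```python
import numpy as np

def approximate_polyline(curve_fn, n_segments=16):
    # Sample a parametric crease curve at uniform parameter values,
    # producing the polyline that the triangulation works from.
    t = np.linspace(0.0, 1.0, n_segments + 1)
    return np.stack([curve_fn(ti) for ti in t])

def polyline_length(pts):
    # Total chord length; converges to the arc length as segments grow.
    return float(np.linalg.norm(np.diff(pts, axis=0), axis=1).sum())

# Quarter circle of radius 1; its true arc length is pi/2.
quarter = approximate_polyline(
    lambda t: np.array([np.cos(t * np.pi / 2), np.sin(t * np.pi / 2)]))
length = polyline_length(quarter)
```

Sixteen segments already approximate the quarter circle's length to well under a percent, which is why a coarse polyline suffices for interactive checking.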
{"title":"Simple Simulation of Curved Folds Based on Ruling-aware Triangulation","authors":"Kosuke Sasaki, J. Mitani","doi":"10.2312/pg.20201227","DOIUrl":"https://doi.org/10.2312/pg.20201227","url":null,"abstract":"Folding a thin sheet material such as paper along curves creates a developable surface composed of ruled surface patches. When using such surfaces in design, designers often repeat a process of folding along curves drawn on a sheet and checking the folded shape. Although several methods for constructing such shapes on a computer have been proposed, it is still difficult to check the folded shapes instantly from the crease patterns.In this paper, we propose a simple method that approximately realizes a simulation of curved folds with a triangular mesh from its crease pattern. The proposed method first approximates curves in a crease pattern with polylines and then generates a triangular mesh. In order to construct the discretized developable surface, the edges in the mesh are rearranged so that they align with the estimated rulings. The proposed method is characterized by its simplicity and is implemented on an existing origami simulator that runs in a web browser. CCS Concepts • Computing methodologies → Mesh models; Mesh geometry models;","PeriodicalId":88304,"journal":{"name":"Proceedings. Pacific Conference on Computer Graphics and Applications","volume":"66 1","pages":"31-36"},"PeriodicalIF":0.0,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90073961","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Monocular 3D Fluid Volume Reconstruction Based on a Multilayer External Force Guiding Model
Pub Date : 2020-01-01 DOI: 10.2312/pg.20201225
Zhiyuan Su, Xiaoying Nie, Xukun Shen, Yong Hu
{"title":"Monocular 3D Fluid Volume Reconstruction Based on a Multilayer External Force Guiding Model","authors":"Zhiyuan Su, Xiaoying Nie, Xukun Shen, Yong Hu","doi":"10.2312/pg.20201225","DOIUrl":"https://doi.org/10.2312/pg.20201225","url":null,"abstract":"","PeriodicalId":88304,"journal":{"name":"Proceedings. Pacific Conference on Computer Graphics and Applications","volume":"34 1","pages":"19-24"},"PeriodicalIF":0.0,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77566167","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Day-to-Night Road Scene Image Translation Using Semantic Segmentation
Pub Date : 2020-01-01 DOI: 10.2312/pg.20201231
S. Baek, Sungkil Lee
We present a semi-automated framework that translates day-time domain road scene images to those for the night-time domain. Unlike recent studies based on Generative Adversarial Networks (GANs), we avoid learning-based translation and its random failures. Our framework uses semantic annotation to extract scene elements, perceives a scene structure/depth, and applies per-element translation. Experimental results demonstrate that our framework can synthesize higher-resolution results without artifacts in the translation.
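The per-element translation idea reduces, in its simplest form, to applying a class-specific transform under a segmentation mask. The class IDs and darkening gains below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Per-class luminance gains for day-to-night (illustrative values):
# sky darkens strongly, road less so.
NIGHT_GAIN = {0: 0.05, 1: 0.35}   # 0 = sky, 1 = road

def day_to_night(image, seg):
    # Per-element translation: scale each pixel by the gain of its
    # semantic class, a toy version of the per-element pipeline.
    out = image.astype(float).copy()
    for cls, gain in NIGHT_GAIN.items():
        out[seg == cls] *= gain
    return out

img = np.ones((4, 4, 3))
seg = np.zeros((4, 4), dtype=int)
seg[2:] = 1                        # top half sky, bottom half road
night = day_to_night(img, seg)
```

Because the transform is deterministic per class, it cannot produce the mode-collapse or hallucination failures a GAN-based translator can.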
{"title":"Day-to-Night Road Scene Image Translation Using Semantic Segmentation","authors":"S. Baek, Sungkil Lee","doi":"10.2312/pg.20201231","DOIUrl":"https://doi.org/10.2312/pg.20201231","url":null,"abstract":"We present a semi-automated framework that translates day-time domain road scene images to those for the night-time domain. Unlike recent studies based on the Generative Adversarial Networks (GANs), we avoid learning for the translation without random failures. Our framework uses semantic annotation to extract scene elements, perceives a scene structure/depth, and applies per-element translation. Experimental results demonstrate that our framework can synthesize higher-resolution results without artifacts in the translation","PeriodicalId":88304,"journal":{"name":"Proceedings. Pacific Conference on Computer Graphics and Applications","volume":"203 1","pages":"47-48"},"PeriodicalIF":0.0,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77018563","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1