
Latest publications from ACM SIGGRAPH 2016 Posters

Measuring microstructures using confocal laser scanning microscopy for estimating surface roughness
Pub Date : 2016-07-24 DOI: 10.1145/2945078.2945106
Y. Dobashi, Takashi Ijiri, Hideki Todo, Kei Iwasaki, Makoto Okabe, S. Nishimura
Realistic image synthesis is an important research goal in computer graphics. One important factor in achieving this goal is the bidirectional reflectance distribution function (BRDF), which largely governs the appearance of an object. Many BRDF models have therefore been developed. Physically based BRDFs built on microfacet theory [Cook and Torrance 1982] are widely used in many applications since they can produce highly realistic images. A microfacet-based BRDF consists of three terms: a Fresnel term, a normal distribution function, and a geometry function. There are many analytical and approximate models for each of these terms.
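The three terms above can be sketched with common stand-ins — Schlick's Fresnel approximation, the GGX normal distribution, and a Smith-style geometry term. The poster does not say which specific models it measures against, so treat this as an illustrative sketch of a microfacet BRDF, not the authors' formulation:

```python
import numpy as np

def fresnel_schlick(cos_theta, f0):
    # Schlick's approximation to the Fresnel reflectance term
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

def d_ggx(n_dot_h, alpha):
    # GGX / Trowbridge-Reitz normal distribution function
    a2 = alpha * alpha
    denom = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0
    return a2 / (np.pi * denom * denom)

def g_smith(n_dot_v, n_dot_l, alpha):
    # Smith shadowing-masking term (Schlick-GGX form)
    k = alpha / 2.0
    g1 = lambda c: c / (c * (1.0 - k) + k)
    return g1(n_dot_v) * g1(n_dot_l)

def cook_torrance(n, v, l, alpha=0.3, f0=0.04):
    # Specular microfacet BRDF: F * D * G / (4 (n.v) (n.l))
    h = (v + l) / np.linalg.norm(v + l)
    n_dot_v = max(np.dot(n, v), 1e-6)
    n_dot_l = max(np.dot(n, l), 1e-6)
    n_dot_h = max(np.dot(n, h), 0.0)
    v_dot_h = max(np.dot(v, h), 0.0)
    F = fresnel_schlick(v_dot_h, f0)
    D = d_ggx(n_dot_h, alpha)
    G = g_smith(n_dot_v, n_dot_l, alpha)
    return (F * D * G) / (4.0 * n_dot_v * n_dot_l)
```

The roughness parameter `alpha` is exactly what the poster proposes to estimate from measured microstructure: a rougher surface widens the GGX lobe and dims the specular peak.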
Citations: 1
A method for realistic 3D projection mapping using multiple projectors
Pub Date : 2016-07-24 DOI: 10.1145/2945078.2945154
Bilal Ahmed, Jong Hun Lee, Yong Yi Lee, Junho Choi, Yong Hwi Kim, M. Son, M. Joo, Kwan H. Lee
Recently, researchers have shown much interest in 3D projection mapping systems, but relatively little work has been done to make the projected content look realistic. Much work exists on multi-projector blending, 3D projection mapping, and multi-projector large displays, but existing color-compensation-based systems still suffer from contrast compression, color inconsistencies, and inappropriate luminance over the three-dimensional projection surface, giving rise to an unappealing appearance. Achieving a projection-mapped result that looks realistic when compared with a similar original object remains a challenge. In this paper, we present a framework that optimizes the images projected by multiple projectors in order to achieve an appearance close to that of a real object whose appearance is being regenerated by projection mapping.
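A single-projector radiometric compensation step — the baseline whose clipping artifacts (contrast compression) motivate sharing the load across multiple projectors — can be sketched as follows. The divide-by-albedo model is a common textbook simplification, not the poster's optimization:

```python
import numpy as np

def compensation_image(target, surface_albedo, max_out=1.0):
    # Choose projector output so that target ≈ albedo * projector.
    # Where the surface is too dark, the required output exceeds the
    # projector's range and must be clipped -- this clipping is the
    # contrast compression the multi-projector framework aims to reduce.
    proj = np.divide(target, np.maximum(surface_albedo, 1e-6))
    return np.clip(proj, 0.0, max_out)
```

With several projectors covering the same surface point, each one needs to supply only a fraction of `proj`, so fewer pixels hit the clip limit.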
Citations: 0
Camera calibration by recovering projected centers of circle pairs
Pub Date : 2016-07-24 DOI: 10.1145/2945078.2945117
Qian Chen, Haiyuan Wu, Shinichi Higashino, R. Sakamoto
In this paper, we present a convenient method for camera calibration from a single image containing arbitrary coplanar circle pairs. The method is based on accurately recovering the projected centers of the circle pairs using a closed-form algorithm.
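The abstract does not spell out the closed-form algorithm, but the projective-geometry fact it presumably rests on is standard: the image of a circle's center is the pole of the vanishing line with respect to the imaged conic — not the centroid of the observed ellipse. A minimal NumPy sketch with a synthetic homography:

```python
import numpy as np

def circle_conic(cx, cy, r):
    # Matrix form of x^2 + y^2 - 2*cx*x - 2*cy*y + cx^2 + cy^2 - r^2 = 0
    return np.array([[1.0, 0.0, -cx],
                     [0.0, 1.0, -cy],
                     [-cx, -cy, cx * cx + cy * cy - r * r]])

def projected_center(conic_img, vanishing_line):
    # The circle center maps to the pole of the vanishing line with
    # respect to the imaged conic: p ~ C^{-1} l (homogeneous, up to scale).
    p = np.linalg.solve(conic_img, vanishing_line)
    return p[:2] / p[2]

# Synthetic check: image a known circle with a homography H.
H = np.array([[1.0, 0.2, 0.1],
              [0.1, 1.0, 0.3],
              [0.05, 0.02, 1.0]])
Hinv = np.linalg.inv(H)
C = circle_conic(1.0, 2.0, 0.5)
C_img = Hinv.T @ C @ Hinv                    # conics map as H^{-T} C H^{-1}
l_inf = Hinv.T @ np.array([0.0, 0.0, 1.0])   # image of the line at infinity
center_img = projected_center(C_img, l_inf)

truth = H @ np.array([1.0, 2.0, 1.0])        # directly projected center
truth = truth[:2] / truth[2]
```

Under perspective distortion the recovered point differs from the fitted ellipse's geometric center, which is exactly why a dedicated recovery step is needed before calibration.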
Citations: 5
Real time 360° video stitching and streaming
Pub Date : 2016-07-24 DOI: 10.1145/2945078.2945148
Rodrigo Marques Almeida da Silva, B. Feijó, Pablo B. Gomes, Thiago Frensh, Daniel Monteiro
In this paper we propose a GPU-based methodology for real-time 360° video stitching and stream processing. The solution scales to large per-camera resolutions, such as 4K and 8K, and supports broadcasting scenarios with cloud architectures. The methodology warps a group of deformable meshes, processed with OpenGL (GLSL), and the final image combines the inputs using a robust pixel shader. Moreover, the result can be streamed to a cloud service using H.264 encoding via the NVENC GPU encoder. Finally, we present some results.
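As an illustration of the per-pixel combination stage (which the authors run as a GLSL pixel shader over deformable meshes), here is a hypothetical NumPy sketch that feather-blends the overlap between two adjacent, already-warped camera strips; the mesh warping and the GPU specifics are omitted:

```python
import numpy as np

def feather_blend(img_a, img_b, overlap):
    # Linearly cross-fade the last `overlap` columns of img_a into the
    # first `overlap` columns of img_b, then concatenate the strips.
    ramp = np.linspace(1.0, 0.0, overlap)[None, :, None]  # weight for img_a
    blended = img_a[:, -overlap:] * ramp + img_b[:, :overlap] * (1.0 - ramp)
    return np.concatenate(
        [img_a[:, :-overlap], blended, img_b[:, overlap:]], axis=1)
```

In a shader the same ramp is a per-fragment weight sampled from a blend mask, so the cost is constant per pixel regardless of camera count.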
Citations: 23
Real-time rendering of high-quality effects using multi-frame sampling
Pub Date : 2016-07-24 DOI: 10.1145/2945078.2945157
Daniel Limberger, J. Döllner
In rendering environments with comparatively sparse interaction, e.g., digital production tools, image synthesis and its quality need not be constrained to single frames. This work analyzes strategies for rendering state-of-the-art effects highly economically using progressive multi-frame sampling in real time. By distributing and accumulating samples of sampling-based rendering techniques (e.g., anti-aliasing, order-independent transparency, physically based depth of field and shadowing, ambient occlusion, reflections) over multiple frames, images of very high quality can be synthesized with unequaled resource efficiency.
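The accumulation idea can be sketched as a running incremental mean over jittered frames; this is an assumed minimal form of the scheme, not the authors' implementation:

```python
import numpy as np

class MultiFrameAccumulator:
    """Progressively average one cheap sample per displayed frame
    (e.g. a jittered anti-aliasing or depth-of-field sample) into a
    converging image. Any camera motion resets the accumulation."""

    def __init__(self, shape):
        self.average = np.zeros(shape)
        self.n = 0

    def add_frame(self, frame):
        self.n += 1
        # Incremental mean: avg += (x - avg) / n, numerically stable
        # and constant-memory regardless of how many frames accumulate.
        self.average += (frame - self.average) / self.n
        return self.average

    def reset(self):
        self.average[:] = 0.0
        self.n = 0
```

Each frame stays within the real-time budget because it draws a single sample per technique; quality converges while the view is static.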
Citations: 3
A data-driven BSDF framework
Pub Date : 2016-07-24 DOI: 10.1145/2945078.2945109
Murat Kurt, G. Ward, Nicolas Bonneel
We present a data-driven Bidirectional Scattering Distribution Function (BSDF) representation and a model-free technique that preserves the integrity of the original data and interpolates reflection as well as transmission functions for arbitrary materials. Our interpolation technique employs Radial Basis Functions (RBFs), Radial Basis Systems (RBSs) and displacement techniques to track peaks in the distribution. The proposed data-driven BSDF representation can be used to render arbitrary BSDFs and includes an efficient Monte Carlo importance sampling scheme. We show that our data-driven BSDF framework can be used to represent measured BSDFs that are visually plausible and demonstrably accurate.
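As a sketch of the RBF interpolation step, the following fits a Gaussian-kernel interpolant through scattered samples. The kernel choice is an assumption, and the paper's RBS construction and peak-tracking displacement are not reproduced here:

```python
import numpy as np

def rbf_fit(centers, values, eps=1.0):
    # Solve the dense linear system so the Gaussian RBF interpolant
    # passes exactly through every sample (centers: (n, d), values: (n,)).
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    phi = np.exp(-(eps * d) ** 2)
    return np.linalg.solve(phi, values)

def rbf_eval(x, centers, weights, eps=1.0):
    # Evaluate the fitted interpolant at query points x: (m, d).
    d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=-1)
    return np.exp(-(eps * d) ** 2) @ weights
```

For BSDF data the sample coordinates would be a parameterization of the in/out directions; the displacement technique mentioned in the abstract re-centers that parameterization so the sharp specular peak stays aligned between samples.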
Citations: 2
Optimal LED selection for multispectral lighting reproduction
Pub Date : 2016-07-24 DOI: 10.1145/2945078.2945150
Chloe LeGendre, Xueming Yu, P. Debevec
We demonstrate the sufficiency of using as few as five LEDs of distinct spectra for multispectral lighting reproduction and solve for the optimal set of five from 11 such commercially available LEDs. We leverage published spectral reflectance, illuminant, and camera spectral sensitivity datasets to show that two approaches of lighting reproduction, matching illuminant spectra directly and matching material color appearance observed by one or more cameras or a human observer, yield the same LED selections. Our proposed optimal set of five LEDs includes red, green, and blue with narrow emission spectra, along with white and amber with broader spectra.
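The subset search can be illustrated by exhaustively scoring every k-LED combination against a target illuminant spectrum with a least-squares mix. The LED data here are synthetic stand-ins, and clipping the weights is a simplification of a proper non-negative least-squares solve:

```python
import itertools
import numpy as np

def best_led_subset(led_spectra, target, k=5):
    # led_spectra: (n_leds, n_wavelengths); target: (n_wavelengths,).
    # Try every k-subset, fit mixing weights by least squares, and keep
    # the subset with the smallest spectral reproduction error.
    best = (np.inf, None, None)
    for idx in itertools.combinations(range(led_spectra.shape[0]), k):
        A = led_spectra[list(idx)].T                 # wavelengths x k
        w, *_ = np.linalg.lstsq(A, target, rcond=None)
        w = np.clip(w, 0.0, None)                    # drive levels >= 0
        err = np.linalg.norm(A @ w - target)
        if err < best[0]:
            best = (err, idx, w)
    return best
```

With 11 candidate LEDs and k = 5 there are only C(11, 5) = 462 subsets, so brute force is entirely practical; the paper's second criterion (matching camera-observed material colors) would change only the error function.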
Citations: 7
3D facial geometry reconstruction using patch database
Pub Date : 2016-07-24 DOI: 10.1145/2945078.2945102
Tsukasa Nozawa, Takuya Kato, Pavel A. Savkin, N. Nozawa, S. Morishima
3D facial shape reconstruction in unconstrained ("in the wild") environments is an important research task in the fields of computer graphics and computer vision, since it can be applied to many products, such as 3DCG video games and face recognition. One of the most popular techniques is the 3D model-based approach, which approximates a facial shape using a 3D face model computed by principal component analysis. [Blanz and Vetter 1999] performed 3D facial reconstruction by fitting facial feature points detected in a single input image to vertices of a template 3D face model, the 3D Morphable Model. This method can reconstruct a facial shape from images with varying lighting and face orientation, as long as the facial feature points can be detected. However, the quality of the result depends on the resolution of the 3D model.
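The model-based fitting step can be sketched as a regularized least-squares solve for the PCA coefficients of a morphable model. This is a hypothetical minimal form (dense correspondences, no camera projection), not the full analysis-by-synthesis pipeline of [Blanz and Vetter 1999]:

```python
import numpy as np

def fit_morphable_model(mean_shape, basis, observed, lam=1e-3):
    # Find coefficients c minimizing ||mean + basis @ c - observed||^2
    # + lam * ||c||^2. Shapes are flattened (3N,) vectors of vertex
    # coordinates; basis is the (3N, k) matrix of PCA shape modes.
    r = observed - mean_shape
    A = basis.T @ basis + lam * np.eye(basis.shape[1])
    c = np.linalg.solve(A, basis.T @ r)
    return mean_shape + basis @ c, c
```

The resolution limitation the paragraph ends on is visible here: the reconstruction can never contain detail finer than the template mesh and its k modes, which is what a patch database of high-resolution exemplars would add back.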
Citations: 0
Intuitive editing of material appearance
Pub Date : 2016-07-24 DOI: 10.1145/2945078.2945141
A. Serrano, D. Gutierrez, K. Myszkowski, H. Seidel, B. Masiá
Many different techniques for measuring material appearance have been proposed in the last few years, producing large public datasets that have been used for accurate, data-driven appearance modeling. Although these datasets have allowed us to reach an unprecedented level of realism in visual appearance, editing the captured data remains a challenge. In this work, we develop a novel methodology for intuitive and predictable editing of captured BRDF data, which allows for artistic creation of plausible material appearances while bypassing the difficulty of acquiring novel samples. We synthesize novel materials, extending the existing MERL dataset [Matusik et al. 2003] to 400 mathematically valid BRDFs. We design a large-scale experiment with 400 participants, gathering 56,000 ratings of the perceptual attributes that best describe our extended dataset of materials. Using these ratings, we build and train networks of radial basis functions to act as functionals that map the high-level perceptual attributes to an underlying PCA-based representation of BRDFs. We show how our approach allows for intuitive edits of a wide range of visual properties, and demonstrate through a user study that our functionals are excellent predictors of the perceived attributes of appearance, enabling predictable editing with our framework.
Citations: 0
A fiber-level model for predictive cloth rendering
Pub Date : 2016-07-24 DOI: 10.1145/2945078.2945144
Carlos Aliaga, C. Castillo, D. Gutierrez, M. Otaduy, Jorge López-Moreno, A. Jarabo
Rendering realistic fabrics is an active research area with many applications in computer graphics and other fields such as textile design. Reproducing the appearance of cloth remains challenging due to the micro-structures found in textiles and the complex light-scattering patterns exhibited at such scales. Recent approaches have achieved very realistic results, either by directly modeling the arrangement of the fibers [Schröder et al. 2011] or by capturing the structure of small pieces of cloth with computed tomography (CT) scanners [Zhao et al. 2011]. However, there is still a need for predictive modeling of cloth appearance: existing methods either rely on manually set parameter values or use photographs of real pieces of cloth to guide appearance-matching algorithms, often making simplifying assumptions, such as circular or elliptical fiber cross sections or a homogeneous volume density, that lead to very different appearances.
Citations: 0