Measuring microstructures using confocal laser scanning microscopy for estimating surface roughness
Y. Dobashi, Takashi Ijiri, Hideki Todo, Kei Iwasaki, Makoto Okabe, S. Nishimura
Realistic image synthesis is an important research goal in computer graphics. One important factor in achieving this goal is the bidirectional reflectance distribution function (BRDF), which largely governs the appearance of an object, and many BRDF models have therefore been developed. Physically based BRDFs built on microfacet theory [Cook and Torrance 1982] are widely used in many applications because they can produce highly realistic images. A microfacet-based BRDF consists of three terms: a Fresnel term, a normal distribution function, and a geometric term. Many analytical and approximate models exist for each of these terms.
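For context, a standard way the three terms combine (the textbook Cook-Torrance form, not specific to this poster) is

$$
f_r(\mathbf{l},\mathbf{v}) = \frac{D(\mathbf{h})\,F(\mathbf{v},\mathbf{h})\,G(\mathbf{l},\mathbf{v})}{4\,(\mathbf{n}\cdot\mathbf{l})\,(\mathbf{n}\cdot\mathbf{v})},
\qquad
\mathbf{h} = \frac{\mathbf{l}+\mathbf{v}}{\lVert\mathbf{l}+\mathbf{v}\rVert},
$$

where $D$ is the normal distribution function (the term a measured surface roughness would parameterize), $F$ the Fresnel term, and $G$ the geometric shadowing-masking term.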
{"title":"Measuring microstructures using confocal laser scanning microscopy for estimating surface roughness","authors":"Y. Dobashi, Takashi Ijiri, Hideki Todo, Kei Iwasaki, Makoto Okabe, S. Nishimura","doi":"10.1145/2945078.2945106","DOIUrl":"https://doi.org/10.1145/2945078.2945106","url":null,"abstract":"Realistic image synthesis is an important research goal in computer graphics. One important factor to achieve this goal is a bidirectional reflectance distribution function (BRDF) that mainly governs an appearance of an object. Many BRDF models have therefore been developed. A physically-based BRDF based on microfacet theory [Cook and Torrance 1982] is widely used in many applications since it can produce highly realistic images. The microfacetbased BRDF consists of three terms; a Fresnel, a normal distribution, and a geometric functions. There are many analytical and approximate models for each of these terms.","PeriodicalId":417667,"journal":{"name":"ACM SIGGRAPH 2016 Posters","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115293695","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A method for realistic 3D projection mapping using multiple projectors
Bilal Ahmed, Jong Hun Lee, Yong Yi Lee, Junho Choi, Yong Hwi Kim, M. Son, M. Joo, Kwan H. Lee
Researchers have recently shown much interest in 3D projection mapping systems, but relatively little work has been done to make the projected content look realistic. Much work exists on multi-projector blending, 3D projection mapping, and multi-projector large displays, yet existing color-compensation-based systems still suffer from contrast compression, color inconsistencies, and inappropriate luminance across the three-dimensional projection surface, giving rise to an unappealing appearance. Producing a projection-mapped 3D object that looks realistic next to a similar original object remains a challenge. In this paper, we present a framework that optimizes the images projected by multiple projectors in order to achieve an appearance close to that of the real object being regenerated by projection mapping.
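The abstract does not detail the optimization; as a rough, hypothetical sketch of the per-pixel compensation problem underneath (the linear mixing model and all names are our assumptions, following common radiometric-compensation formulations):

```python
import numpy as np
from scipy.optimize import lsq_linear

def compensate_pixel(V_list, ambient, target_rgb):
    """Solve for each projector's input RGB at one surface point so the
    summed projector contributions match the target appearance.
    V_list:  per-projector 3x3 color-mixing matrices (assumed measured
             in a prior calibration step); ambient: 3-vector black level."""
    A = np.hstack(V_list)             # 3 x (3K): contributions add linearly
    b = target_rgb - ambient          # light the projectors must supply
    # Bounded least squares keeps every input displayable in [0, 1];
    # the residual is where contrast compression becomes unavoidable.
    res = lsq_linear(A, b, bounds=(0.0, 1.0))
    return res.x.reshape(len(V_list), 3)
```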
{"title":"A method for realistic 3D projection mapping using multiple projectors","authors":"Bilal Ahmed, Jong Hun Lee, Yong Yi Lee, Junho Choi, Yong Hwi Kim, M. Son, M. Joo, Kwan H. Lee","doi":"10.1145/2945078.2945154","DOIUrl":"https://doi.org/10.1145/2945078.2945154","url":null,"abstract":"Recently researchers have shown much interest in 3D projection mapping systems but relatively less work has been done to make the contents look realistic. Much work has been done for multi-projector blending, 3D projection mapping and multi-projector based large displays but existing color compensation based systems still suffer from contrast compression, color inconsistencies and inappropriate luminance over the three dimensional projection surface giving rise to an un-appealing appearance. Until now having a realistic result with projection mapping on 3D objects when compared with a similar original object still remains a challenge. In this paper, we present a framework that optimizes projected images using multiple projectors in order to achieve an appearance that looks close to a real object whose appearance is being regenerated by projection mapping.","PeriodicalId":417667,"journal":{"name":"ACM SIGGRAPH 2016 Posters","volume":"14 8","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114093527","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Camera calibration by recovering projected centers of circle pairs
Qian Chen, Haiyuan Wu, Shinichi Higashino, R. Sakamoto
In this paper, we present a convenient method for camera calibration from a single image containing arbitrary coplanar circle pairs. The method is based on accurately recovering the projected centers of the circle pairs using a closed-form algorithm.
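One classical ingredient behind such a recovery (our illustration, not necessarily the authors' exact derivation): the center of a circle is the pole of the line at infinity with respect to the circle, and pole-polar relations survive the homography, so the projected center is the pole of the plane's vanishing line with respect to the imaged ellipse:

```python
import numpy as np

def projected_center(C, l_inf):
    """Projected center of a circle imaged as the conic x^T C x = 0.
    C:     3x3 symmetric ellipse matrix fitted in the image.
    l_inf: image of the support plane's line at infinity, which a
           coplanar circle pair determines (e.g., from the degenerate
           members of the conic pencil C1 - lambda * C2)."""
    c = np.linalg.solve(C, l_inf)   # pole of l_inf with respect to C
    return c[:2] / c[2]             # dehomogenize to pixel coordinates
```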
{"title":"Camera calibration by recovering projected centers of circle pairs","authors":"Qian Chen, Haiyuan Wu, Shinichi Higashino, R. Sakamoto","doi":"10.1145/2945078.2945117","DOIUrl":"https://doi.org/10.1145/2945078.2945117","url":null,"abstract":"In this paper, we present a convenient method for camera calibration with arbitrary co-planar circle-pairs from one image. This method is based on the accurate recovery of the projected centers of the circle pairs using a closed-form algorithm.","PeriodicalId":417667,"journal":{"name":"ACM SIGGRAPH 2016 Posters","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124206586","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Real time 360° video stitching and streaming
Rodrigo Marques Almeida da Silva, B. Feijó, Pablo B. Gomes, Thiago Frensh, Daniel Monteiro
In this paper we propose a GPU-centered methodology for real-time 360° video stitching and streaming. The solution scales to large per-camera resolutions, such as 4K and 8K, and supports broadcasting with cloud architectures. A group of deformable meshes, processed with OpenGL (GLSL), warps the inputs, and a robust pixel shader combines them into the final image. Moreover, the result can be streamed to a cloud service as H.264, encoded on the GPU with NVENC. Finally, we present some results.
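As a minimal CPU stand-in for the final combination pass (the mesh warp and NVENC encoding are omitted, and all names are ours), the pixel shader's job amounts to a per-pixel weighted average of the warped camera images:

```python
import numpy as np

def blend_warped_frames(frames, weights):
    """Feathered blend of frames already warped onto the panorama by the
    deformable meshes.
    frames:  list of HxWx3 float arrays (zero outside each camera's view)
    weights: list of HxW masks that fall off smoothly near the seams"""
    num = np.zeros_like(frames[0])
    den = np.zeros(frames[0].shape[:2])
    for f, w in zip(frames, weights):
        num += f * w[..., None]
        den += w
    return num / np.maximum(den, 1e-6)[..., None]   # avoid divide-by-zero
```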
{"title":"Real time 360° video stitching and streaming","authors":"Rodrigo Marques Almeida da Silva, B. Feijó, Pablo B. Gomes, Thiago Frensh, Daniel Monteiro","doi":"10.1145/2945078.2945148","DOIUrl":"https://doi.org/10.1145/2945078.2945148","url":null,"abstract":"In this paper we propose a real time 360° video stitching and streaming processing methodology focused on GPU. The solution creates a scalable solution for large resolutions, such as 4K and 8K per camera, and supports broadcasting solutions with cloud architectures. The methodology uses a group of deformable meshes, processed using OpenGL (GLSL) and the final image combine the inputs using a robust pixel shader. Moreover, the result can be streamed to a cloud service using h.264 encoding with nVEnc GPU encoding. Finally, we present some results.","PeriodicalId":417667,"journal":{"name":"ACM SIGGRAPH 2016 Posters","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121705267","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In rendering environments with comparatively sparse interaction, e.g., digital production tools, image synthesis and its quality do not have to be constrained to single frames. This work analyzes strategies for highly economical rendering of state-of-the-art effects using progressive multi-frame sampling in real time. By distributing and accumulating samples of sampling-based rendering techniques (e.g., anti-aliasing, order-independent transparency, physically based depth of field and shadowing, ambient occlusion, reflections) over multiple frames, images of very high quality can be synthesized with unequaled resource efficiency.
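The accumulation itself is a running average; a minimal sketch (ours), where `render_with_jitter` stands for any of the sampling-based techniques drawing a fresh per-frame sample offset:

```python
def accumulate(prev_avg, new_frame, n):
    """Fold the n-th jittered frame (a numpy image) into the converged
    result: the average of n+1 frames, computed incrementally."""
    return (prev_avg * n + new_frame) / (n + 1)

# Usage sketch: reset n to 0 whenever the camera or scene changes,
# then let quality converge while interaction is sparse.
#   avg = accumulate(avg, render_with_jitter(sample_index=n), n)
```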
{"title":"Real-time rendering of high-quality effects using multi-frame sampling","authors":"Daniel Limberger, J. Döllner","doi":"10.1145/2945078.2945157","DOIUrl":"https://doi.org/10.1145/2945078.2945157","url":null,"abstract":"In a rendering environment of comparatively sparse interaction, e.g., digital production tools, image synthesis and its quality do not have to be constrained to single frames. This work analyzes strategies for highly economically rendering of state-of-the-art rendering effects using progressive multi-frame sampling in real-time. By distributing and accumulating samples of sampling-based rendering techniques (e.g., anti-aliasing, order-independent transparency, physically-based depth-of-field and shadowing, ambient occlusion, reflections) over multiple frames, images of very high quality can be synthesized with unequaled resource-efficiency.","PeriodicalId":417667,"journal":{"name":"ACM SIGGRAPH 2016 Posters","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124756286","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present a data-driven Bidirectional Scattering Distribution Function (BSDF) representation and a model-free technique that preserves the integrity of the original data and interpolates reflection as well as transmission functions for arbitrary materials. Our interpolation technique employs Radial Basis Functions (RBFs), Radial Basis Systems (RBSs) and displacement techniques to track peaks in the distribution. The proposed data-driven BSDF representation can be used to render arbitrary BSDFs and includes an efficient Monte Carlo importance sampling scheme. We show that our data-driven BSDF framework can be used to represent measured BSDFs that are visually plausible and demonstrably accurate.
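A minimal sketch of the interpolation building block (plain Gaussian RBFs under our naming; the radial basis systems and peak-tracking displacement described above go beyond this):

```python
import numpy as np

def rbf_fit(X, y, eps=2.0):
    """Fit Gaussian RBF weights to scattered BSDF samples.
    X: N x d sample coordinates (some parameterization of the in/out
       directions); y: N measured values; eps: shape parameter (ours)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.linalg.solve(np.exp(-eps * d2), y)

def rbf_eval(X, w, Xq, eps=2.0):
    """Evaluate the interpolant at query coordinates Xq."""
    d2 = ((Xq[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-eps * d2) @ w
```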
{"title":"A data-driven BSDF framework","authors":"Murat Kurt, G. Ward, Nicolas Bonneel","doi":"10.1145/2945078.2945109","DOIUrl":"https://doi.org/10.1145/2945078.2945109","url":null,"abstract":"We present a data-driven Bidirectional Scattering Distribution Function (BSDF) representation and a model-free technique that preserves the integrity of the original data and interpolates reflection as well as transmission functions for arbitrary materials. Our interpolation technique employs Radial Basis Functions (RBFs), Radial Basis Systems (RBSs) and displacement techniques to track peaks in the distribution. The proposed data-driven BSDF representation can be used to render arbitrary BSDFs and includes an efficient Monte Carlo importance sampling scheme. We show that our data-driven BSDF framework can be used to represent measured BSDFs that are visually plausible and demonstrably accurate.","PeriodicalId":417667,"journal":{"name":"ACM SIGGRAPH 2016 Posters","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115121393","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We demonstrate the sufficiency of using as few as five LEDs of distinct spectra for multispectral lighting reproduction and solve for the optimal set of five from 11 such commercially available LEDs. We leverage published spectral reflectance, illuminant, and camera spectral sensitivity datasets to show that two approaches of lighting reproduction, matching illuminant spectra directly and matching material color appearance observed by one or more cameras or a human observer, yield the same LED selections. Our proposed optimal set of five LEDs includes red, green, and blue with narrow emission spectra, along with white and amber with broader spectra.
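For the illuminant-spectrum-matching variant, the core computation reduces to a nonnegative least-squares solve; a sketch under our naming (the color-appearance variant would instead match camera or observer responses):

```python
import numpy as np
from scipy.optimize import nnls

def led_weights(led_spectra, target_spd):
    """Nonnegative drive levels so the summed LED emission approximates
    a target illuminant spectrum.
    led_spectra: K x W matrix, one emission spectrum per LED on a shared
                 wavelength grid; target_spd: W-vector."""
    w, residual = nnls(led_spectra.T, target_spd)  # min ||S^T w - t||, w >= 0
    return w, residual
```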
{"title":"Optimal LED selection for multispectral lighting reproduction","authors":"Chloe LeGendre, Xueming Yu, P. Debevec","doi":"10.1145/2945078.2945150","DOIUrl":"https://doi.org/10.1145/2945078.2945150","url":null,"abstract":"We demonstrate the sufficiency of using as few as five LEDs of distinct spectra for multispectral lighting reproduction and solve for the optimal set of five from 11 such commercially available LEDs. We leverage published spectral reflectance, illuminant, and camera spectral sensitivity datasets to show that two approaches of lighting reproduction, matching illuminant spectra directly and matching material color appearance observed by one or more cameras or a human observer, yield the same LED selections. Our proposed optimal set of five LEDs includes red, green, and blue with narrow emission spectra, along with white and amber with broader spectra.","PeriodicalId":417667,"journal":{"name":"ACM SIGGRAPH 2016 Posters","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129558720","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
3D facial geometry reconstruction using patch database
Tsukasa Nozawa, Takuya Kato, Pavel A. Savkin, N. Nozawa, S. Morishima
3D facial shape reconstruction in unconstrained, in-the-wild environments is an important research task in computer graphics and computer vision, because it can be applied to many products, such as 3DCG video games and face recognition. One of the most popular techniques is the 3D model-based approach, which approximates a facial shape using a 3D face model computed by principal component analysis. [Blanz and Vetter 1999] reconstructed 3D faces by fitting the vertices of a template 3D face model, the 3D Morphable Model, to facial feature points detected in a single input image. This method can reconstruct a facial shape from images with varying lighting and head orientation, as long as the facial feature points can be detected. However, the quality of the result depends on the resolution of the 3D model.
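A textbook sketch of the model-based fitting step just described (scaled-orthographic camera assumed known; in practice pose and shape are alternated; all names are ours):

```python
import numpy as np

def fit_shape(mean_shape, basis, landmark_idx, landmarks_2d, P, lam=1e-3):
    """Least-squares PCA coefficients so projected model vertices meet
    the detected 2D landmarks.
    mean_shape: 3N vector, basis: 3N x k PCA basis, P: 2x3 camera."""
    rows = np.concatenate([[3*i, 3*i + 1, 3*i + 2] for i in landmark_idx])
    mu = mean_shape[rows].reshape(-1, 3)            # n x 3 landmark vertices
    B = basis[rows].reshape(-1, 3, basis.shape[1])  # n x 3 x k
    A = np.einsum('ij,njk->nik', P, B).reshape(-1, basis.shape[1])
    b = (landmarks_2d - mu @ P.T).reshape(-1)
    # Tikhonov term keeps coefficients in the plausible PCA range.
    k = basis.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(k), A.T @ b)
```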
{"title":"3D facial geometry reconstruction using patch database","authors":"Tsukasa Nozawa, Takuya Kato, Pavel A. Savkin, N. Nozawa, S. Morishima","doi":"10.1145/2945078.2945102","DOIUrl":"https://doi.org/10.1145/2945078.2945102","url":null,"abstract":"3D facial shape reconstruction in the wild environments is an important research task in the field of CG and CV. This is because it can be applied to a lot of products, such as 3DCG video games and face recognition. One of the most popular 3D facial shape reconstruction techniques is 3D Model-based approach. This approach approximates a facial shape by using 3D face model, which is calculated by principal component analysis. [Blanz and Vetter 1999] performed a 3D facial reconstruction by fitting points from facial feature points of an input of single facial image to vertex of template 3D facial model named 3D Morphable Model. This method can reconstruct a facial shape from a variety of images which include different lighting and face orientation, as long as facial feature points can be detected. However, representation quality of the result depends on the number of 3D model resolution.","PeriodicalId":417667,"journal":{"name":"ACM SIGGRAPH 2016 Posters","volume":"8 4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129711362","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Intuitive editing of material appearance
A. Serrano, D. Gutierrez, K. Myszkowski, H. Seidel, B. Masiá
Many different techniques for measuring material appearance have been proposed in the last few years. These have produced large public datasets, which have been used for accurate, data-driven appearance modeling. However, although these datasets have allowed us to reach an unprecedented level of realism in visual appearance, editing the captured data remains a challenge. In this work, we develop a novel methodology for intuitive and predictable editing of captured BRDF data, which allows for artistic creation of plausible material appearances, bypassing the difficulty of acquiring novel samples. We synthesize novel materials, and extend the existing MERL dataset [Matusik et al. 2003] up to 400 mathematically valid BRDFs. We design a large-scale experiment with 400 participants, gathering 56000 ratings about the perceptual attributes that best describe our extended dataset of materials. Using these ratings, we build and train networks of radial basis functions to act as functionals that map the high-level perceptual attributes to an underlying PCA-based representation of BRDFs. We show how our approach allows for intuitive edits of a wide range of visual properties, and demonstrate through a user study that our functionals are excellent predictors of the perceived attributes of appearance, enabling predictable editing with our framework.
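A rough analogue of the functionals described above (a ridge-regularized Gaussian RBF network; centers, widths, and names are our assumptions):

```python
import numpy as np

def train_rbf_net(ratings, pca_coeffs, centers, eps=1.0, lam=1e-2):
    """Map perceptual-attribute ratings (rows of `ratings`) to PCA
    coefficients of the corresponding BRDFs."""
    d2 = ((ratings[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    Phi = np.exp(-eps * d2)                        # N x M design matrix
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]),
                           Phi.T @ pca_coeffs)     # M x k weight matrix

def edit_material(attrs, centers, W, eps=1.0):
    """Predict PCA coefficients for user-specified attribute values."""
    d2 = ((attrs[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-eps * d2) @ W
```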
{"title":"Intuitive editing of material appearance","authors":"A. Serrano, D. Gutierrez, K. Myszkowski, H. Seidel, B. Masiá","doi":"10.1145/2945078.2945141","DOIUrl":"https://doi.org/10.1145/2945078.2945141","url":null,"abstract":"Many different techniques for measuring material appearance have been proposed in the last few years. These have produced large public datasets, which have been used for accurate, data-driven appearance modeling. However, although these datasets have allowed us to reach an unprecedented level of realism in visual appearance, editing the captured data remains a challenge. In this work, we develop a novel methodology for intuitive and predictable editing of captured BRDF data, which allows for artistic creation of plausible material appearances, bypassing the difficulty of acquiring novel samples. We synthesize novel materials, and extend the existing MERL dataset [Matusik et al. 2003] up to 400 mathematically valid BRDFs. We design a large-scale experiment with 400 participants, gathering 56000 ratings about the perceptual attributes that best describe our extended dataset of materials. Using these ratings, we build and train networks of radial basis functions to act as functionals that map the high-level perceptual attributes to an underlying PCA-based representation of BRDFs. We show how our approach allows for intuitive edits of a wide range of visual properties, and demonstrate through a user study that our functionals are excellent predictors of the perceived attributes of appearance, enabling predictable editing with our framework.","PeriodicalId":417667,"journal":{"name":"ACM SIGGRAPH 2016 Posters","volume":"135 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132610848","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A fiber-level model for predictive cloth rendering
Carlos Aliaga, C. Castillo, D. Gutierrez, M. Otaduy, Jorge López-Moreno, A. Jarabo
Rendering realistic fabrics is an active research area with many applications in computer graphics and other fields, such as textile design. Reproducing the appearance of cloth remains challenging due to the micro-structures found in textiles and the complex light scattering patterns exhibited at such scales. Recent approaches have achieved very realistic results, either by directly modeling the arrangement of the fibers [Schröder et al. 2011] or by capturing the structure of small pieces of cloth with computed tomography (CT) scanners [Zhao et al. 2011]. However, there is still a need for predictive modeling of cloth appearance: existing methods either rely on manually set parameter values or use photographs of real pieces of cloth to guide appearance-matching algorithms, often adopting simplifications, such as circular or elliptical fiber cross sections or a homogeneous volume density, that lead to very different appearances.
{"title":"A fiber-level model for predictive cloth rendering","authors":"Carlos Aliaga, C. Castillo, D. Gutierrez, M. Otaduy, Jorge López-Moreno, A. Jarabo","doi":"10.1145/2945078.2945144","DOIUrl":"https://doi.org/10.1145/2945078.2945144","url":null,"abstract":"Rendering realistic fabrics is an active research area with many applications in computer graphics and other fields like textile design. Reproducing the appearance of cloth remains challenging due to the micro-structures found in textiles, and the complex light scattering patterns exhibited at such scales. Recent approaches have reached very realistic results, either by directly modeling the arrangement of the fibers [Schröder et al. 2011], or capturing the structure of small pieces of cloth using Computed Tomography scanners (CT) [Zhao et al. 2011]. However, there is still a need for predictive modeling of cloth appearance; existing methods either rely on manually-set parameter values, or use photographs of real pieces of cloth to guide appearance matching algorithms, often assuming certain simplifications such as considering circular or elliptical cross sections, or assuming an homogeneous volume density, that lead to very different appearances.","PeriodicalId":417667,"journal":{"name":"ACM SIGGRAPH 2016 Posters","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133875686","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}