Visualization of Dyeing based on Diffusion and Adsorption Theories
Yuki Morimoto, Masayuki Tanaka, R. Tsuruno, Kiyoshi Tomimatsu
15th Pacific Conference on Computer Graphics and Applications (PG'07), 2007. DOI: 10.1109/PG.2007.51

This paper describes a method for simulating and visualizing dyeing based on weave patterns and the physical parameters of the threads and the dye. We apply Fick's second law with a variable diffusion coefficient. We calculate the diffusion coefficient from the porosity, the tortuosity, and the dye concentration, based on the physical chemistry of dyeing. The tortuosity of the channel is incorporated to account for the effect of the weave pattern on diffusion. In this model, the total mass is conserved. We describe the cloth using a two-layered cellular model that includes the essential factors required for representing the weft and warp. Our model also includes a simple dyeing technique that produces dyeing patterns by interrupting the diffusion of the dye in a cloth using a press. The results obtained using our model demonstrate that it is capable of modeling many of the characteristics of dyeing.

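The diffusion model described in the abstract can be illustrated with a minimal sketch, assuming a 1D concentration field and an arbitrary per-cell diffusion coefficient (the paper derives the coefficient from porosity, tortuosity, and concentration; here `D` is simply an input array). Writing Fick's second law in conservative flux form makes the mass-conservation property the authors mention hold by construction:

```python
import numpy as np

def diffuse_step(c, D, dx, dt):
    """One explicit finite-difference step of Fick's second law with a
    spatially varying coefficient, dc/dt = d/dx( D(x) dc/dx ).
    The conservative (flux) form with zero flux at both ends keeps the
    total mass sum(c) * dx exactly constant."""
    D_mid = 0.5 * (D[:-1] + D[1:])            # coefficient at cell interfaces
    F = np.zeros(len(c) + 1)                  # interface fluxes; ends stay 0
    F[1:-1] = -D_mid * np.diff(c) / dx        # Fick's first law: F = -D dc/dx
    return c - (dt / dx) * (F[1:] - F[:-1])   # dc/dt = -dF/dx
```

Zero flux at both ends models a sealed cloth boundary; the explicit step is stable when dt <= dx^2 / (2 * max(D)).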
Precomputed Visibility Cuts for Interactive Relighting with Dynamic BRDFs
O. Åkerlund, M. Unger, Rui Wang
15th Pacific Conference on Computer Graphics and Applications (PG'07), 2007. DOI: 10.1109/PG.2007.30

This paper presents a novel PRT-based method that uses precomputed visibility cuts for interactive relighting with all-frequency environment maps and arbitrary dynamic BRDFs. Our method is inspired by the recent Lightcuts approach [24] and we parameterize distant environment lighting onto uniformly distributed sample points over the sphere. Using a binary tree structure of the points, we precompute and approximate each vertex's visibility function into clusters that we call the precomputed visibility cuts. These cuts are iteratively selected with bounded approximation error and confined cluster size. At run-time, a GPU-based relighting algorithm quickly computes the view-dependent shading color by accessing a dynamically built light tree, the precomputed visibility cuts, and a direct sampling of an arbitrary BRDF using each visibility cluster's average direction and the dynamic view direction. Compared to existing PRT techniques, our method guarantees uniform sampling of the lighting, requires no precomputed BRDF data, and can be easily extended to handle one-bounce glossy indirect transfer effects in real-time.

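The bounded-error cut selection can be sketched abstractly. This is a generic Lightcuts-style refinement, not the paper's exact criterion; the `intensity * spread` error bound is a stand-in for the real per-cluster bound:

```python
import heapq

class Cluster:
    """A node of a binary light tree: a cluster of lighting samples."""
    def __init__(self, intensity, spread, left=None, right=None):
        self.intensity = intensity   # total power represented by the cluster
        self.spread = spread         # crude proxy for approximation error
        self.left, self.right = left, right

def select_cut(root, eps):
    """Greedily refine the node with the largest error bound until every
    node in the cut satisfies the bound or is a leaf."""
    cut, heap, tie = [], [(-root.intensity * root.spread, 0, root)], 1
    while heap:
        neg_err, _, node = heapq.heappop(heap)
        if -neg_err <= eps or node.left is None:   # bound met, or leaf
            cut.append(node)
        else:                                      # refine worst node first
            for child in (node.left, node.right):
                heapq.heappush(heap, (-child.intensity * child.spread, tie, child))
                tie += 1
    return cut
```

A looser tolerance yields a shallow cut near the root (few clusters); tightening it pushes the cut toward the leaves.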
Model Composition from Interchangeable Components
Vladislav Kraevoy, D. Julius, A. Sheffer
15th Pacific Conference on Computer Graphics and Applications (PG'07), 2007. DOI: 10.1109/PG.2007.40

Following the increasing demand to make the creation and manipulation of 3D geometry simpler and more accessible, we introduce a modeling approach that allows even novice users to create sophisticated models in minutes. Our approach is based on the observation that in many modeling settings users create models which belong to a small set of model classes, such as humans or quadrupeds. The models within each class typically share a common component structure. Following this observation, we introduce a modeling system which utilizes this common component structure, allowing users to create new models by shuffling interchangeable components between existing models. To enable shuffling, we develop a method for computing a compatible segmentation of input models into meaningful, interchangeable components. Using this segmentation, our system lets users create new models with a few mouse clicks, in a fraction of the time required by previous composition techniques. We demonstrate that the shuffling paradigm allows for easy and fast creation of rich geometric content.

Fogshop: Real-Time Design and Rendering of Inhomogeneous, Single-Scattering Media
Kun Zhou, Qiming Hou, Minmin Gong, John M. Snyder, B. Guo, H. Shum
15th Pacific Conference on Computer Graphics and Applications (PG'07), 2007. DOI: 10.1109/PG.2007.48

We describe a new, analytic approximation to the airlight integral from scattering media whose density is modeled as a sum of Gaussians. The approximation supports real-time rendering of inhomogeneous media including their shadowing and scattering effects. For each Gaussian, this approximation samples the scattering integrand at the projection of its center along the view ray but models attenuation and shadowing with respect to the other Gaussians by integrating density along the fixed path from light source to 3D center to view point. Our method handles isotropic, single-scattering media illuminated by point light sources or low-frequency lighting environments. We also generalize models for reflectance of surfaces from constant-density to inhomogeneous media, using simple optical depth averaging in the direction of the light source or all around the receiver point. Our real-time renderer is incorporated into a system for real-time design and preview of realistic animated fog, steam, or smoke.

Automatic Natural Video Matting with Depth
Oliver Wang, Jonathan Finger, Qingxiong Yang, James Davis, Ruigang Yang
15th Pacific Conference on Computer Graphics and Applications (PG'07), 2007. DOI: 10.1109/PG.2007.52

Video matting is the process of taking a sequence of frames, isolating the foreground, and replacing the background in each frame. We look at existing single-frame matting techniques and present a method that improves upon them by adding depth information acquired by a time-of-flight range scanner. We use the depth information to automate the process so it can be practically used for video sequences. In addition, we show that we can improve the results from natural matting algorithms by adding a depth channel. The additional depth information allows us to reduce the artifacts that arise from ambiguities that occur when an object is a similar color to its background.

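One way a depth channel can automate matting, offered as an illustrative sketch rather than the paper's pipeline: threshold the depth map to a binary foreground mask, then mark a band around its boundary as unknown, yielding a trimap that any natural matting algorithm can consume. `np.roll` is used as a crude, wrapping stand-in for morphological dilation/erosion:

```python
import numpy as np

def trimap_from_depth(depth, z_thresh, band=3):
    """Sketch: build a trimap (0 = background, 128 = unknown,
    255 = foreground) from a per-pixel depth map. Pixels nearer than
    z_thresh are foreground; a `band`-pixel strip around the mask
    boundary is left unknown for the matting solver."""
    fg = depth < z_thresh
    dilated, eroded = fg.copy(), fg.copy()
    for _ in range(band):                 # grow and shrink the mask by 1 px
        d, e = dilated, eroded
        dilated = d | np.roll(d, 1, 0) | np.roll(d, -1, 0) \
                    | np.roll(d, 1, 1) | np.roll(d, -1, 1)
        eroded = e & np.roll(e, 1, 0) & np.roll(e, -1, 0) \
                   & np.roll(e, 1, 1) & np.roll(e, -1, 1)
    trimap = np.full(depth.shape, 128, dtype=np.uint8)
    trimap[eroded] = 255                  # confidently foreground
    trimap[~dilated] = 0                  # confidently background
    return trimap
```

The unknown band is exactly where color ambiguity matters, so the matting solver only has to estimate fractional alpha there.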
Fast and Faithful Geometric Algorithm for Detecting Crest Lines on Meshes
S. Yoshizawa, A. Belyaev, H. Yokota, H. Seidel
15th Pacific Conference on Computer Graphics and Applications (PG'07), 2007. DOI: 10.1109/PG.2007.24

A new geometry-based finite difference method is proposed for fast and reliable detection of perceptually salient curvature extrema on surfaces approximated by dense triangle meshes. The method is founded on two simple curvature and curvature-derivative formulas overlooked in modern differential geometry textbooks and on a seemingly new observation about inversion-invariant local surface-based differential forms.

GPU-Based Monte-Carlo Volume Raycasting
Christof Rezk Salama
15th Pacific Conference on Computer Graphics and Applications (PG'07), 2007. DOI: 10.1109/PG.2007.33

This paper presents a practical, high-quality, hardware-accelerated volume rendering approach including scattering, environment mapping, and ambient occlusion. We examine the application of stochastic raytracing techniques for volume rendering and provide a fast GPU-based prototype implementation. In addition, we propose a simple phenomenological scattering model, closely related to the Phong illumination model that many artists are familiar with. We demonstrate that our technique is capable of producing convincing images, yet flexible enough for digital productions in practice.

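A minimal Monte-Carlo raymarch in the spirit of the abstract, as an assumed sketch (emissive-absorptive medium only; no environment map, scattering model, or ambient occlusion): average several jittered ray-marching estimates of the volume rendering integral along one ray.

```python
import math
import random

def raycast_mc(density, step, n_steps, n_samples=16, seed=0):
    """Assumed sketch of stochastic volume raycasting: average several
    jittered ray-marching estimates along a single ray.
    `density(t)` returns the extinction coefficient at distance t."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        T, L = 1.0, 0.0                   # transmittance, accumulated radiance
        offset = rng.random() * step      # jittered start decorrelates samples
        for i in range(n_steps):
            sigma = density(offset + i * step)
            L += T * sigma * step         # emission proportional to density
            T *= math.exp(-sigma * step)  # Beer-Lambert attenuation
        total += L
    return total / n_samples
```

For a constant medium the integral has the closed form 1 - exp(-sigma * depth), which makes a convenient sanity check; the jittered averaging only pays off for inhomogeneous `density` functions, where it trades banding for noise.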
Simple and Efficient Mesh Editing with Consistent Local Frames
N. Paries, P. Degener, R. Klein
15th Pacific Conference on Computer Graphics and Applications (PG'07), 2007. DOI: 10.1109/PG.2007.43

Mesh editing methods based on differential surface representations are known for their efficiency and ease of implementation. For reconstruction from such representations, local frames have to be determined, which is a nonlinear problem. In linear approximations, frames can either degenerate or become inconsistent with the geometry; both result in counterintuitive deformations. Existing nonlinear approaches, however, are comparatively slow and considerably more complex. In this paper we present a differential representation that implicitly enforces orthogonal and geometry-consistent frames while allowing a simple and efficient implementation. In particular, it enforces conformal surface deformations, preserving local texture features.

Developable Strip Approximation of Parametric Surfaces with Global Error Bounds
Yong-Jin Liu, Yu-Kun Lai, Shimin Hu
15th Pacific Conference on Computer Graphics and Applications (PG'07), 2007. DOI: 10.1109/PG.2007.13

Developable surfaces have many desirable properties in manufacturing processes. Since most existing CAD systems use parametric surfaces as the design primitive, there is great demand in industry for converting a parametric surface into developable patches within a prescribed global error bound. In this work we propose a simple and efficient solution to approximate a general parametric surface with a minimum set of C0-joint developable strips. The key contribution of the proposed algorithm is that several global optimization problems are solved in a sequence that offers a controllable global error bound on the developable surface approximation. Experimental results are presented to demonstrate the effectiveness and stability of the proposed algorithm.

Illumination Brush: Interactive Design of All-Frequency Lighting
Makoto Okabe, Y. Matsushita, Li Shen, T. Igarashi
15th Pacific Conference on Computer Graphics and Applications (PG'07), 2007. DOI: 10.1109/PG.2007.9

We present an appearance-based user interface for artists to efficiently design customized image-based lighting environments. Our approach avoids the typical iterations of parameter editing, rendering, and confirmation by providing a set of intuitive user interfaces for directly specifying the desired appearance of the model in the scene. The system then automatically creates the lighting environment by solving the inverse shading problem. To obtain a realistic image, all-frequency lighting is used with a spherical radial basis function (SRBF) representation. Rendering is performed using precomputed radiance transfer (PRT) to achieve a responsive speed. User experiments demonstrated the effectiveness of the proposed system compared to a previous approach.