Radiometric compensation techniques allow seamless projections onto complex everyday surfaces. Implemented with projector-camera systems, they support the presentation of visual content in situations where projection-optimized screens are not available or not desired, such as in museums, historic sites, airplane cabins, or stage performances. We propose a novel approach that employs the full light transport between projectors and a camera to account for many illumination aspects, such as interreflections, refractions, shadows, and defocus. Precomputing the inverse light transport, in combination with an efficient GPU implementation, makes real-time compensation of captured local and global light modulations possible.
{"title":"Radiometric Compensation through Inverse Light Transport","authors":"Gordon Wetzstein, O. Bimber","doi":"10.1109/PG.2007.47","DOIUrl":"https://doi.org/10.1109/PG.2007.47","url":null,"abstract":"Radiometric compensation techniques allow seamless projections onto complex everyday surfaces. Implemented with projector-camera systems they support the presentation of visual content in situations where projection-optimized screens are not available or not desired - as in museums, historic sites, air-plane cabins, or stage performances. We propose a novel approach that employs the full light transport between projectors and a camera to account for many illumination aspects, such as interreflections, refractions, shadows, and defocus. Precomputing the inverse light transport in combination with an efficient implementation on the GPU makes the real-time compensation of captured local and global light modulations possible.","PeriodicalId":376934,"journal":{"name":"15th Pacific Conference on Computer Graphics and Applications (PG'07)","volume":"108 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115857998","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Video matting is the process of taking a sequence of frames, isolating the foreground, and replacing the background in each frame. We look at existing single-frame matting techniques and present a method that improves upon them by adding depth information acquired by a time-of-flight range scanner. We use the depth information to automate the process so it can be practically used for video sequences. In addition, we show that we can improve the results of natural matting algorithms by adding a depth channel. The additional depth information allows us to reduce the artifacts that arise from ambiguities that occur when an object is similar in color to its background.
{"title":"Automatic Natural Video Matting with Depth","authors":"Oliver Wang, Jonathan Finger, Qingxiong Yang, James Davis, Ruigang Yang","doi":"10.1109/PG.2007.52","DOIUrl":"https://doi.org/10.1109/PG.2007.52","url":null,"abstract":"Video matting is the process of taking a sequence of frames, isolating the foreground, and replacing the background in each frame. We look at existing single-frame matting techniques and present a method that improves upon them by adding depth information acquired by a time-offlight range scanner. We use the depth information to automate the process so it can be practically used for video sequences. In addition, we show that we can improve the results from natural matting algorithms by adding a depth channel. The additional depth information allows us to reduce the artifacts that arise from ambiguities that occur when an object is a similar color to its background.","PeriodicalId":376934,"journal":{"name":"15th Pacific Conference on Computer Graphics and Applications (PG'07)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125475587","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We describe a new, analytic approximation to the airlight integral from scattering media whose density is modeled as a sum of Gaussians. The approximation supports real-time rendering of inhomogeneous media, including their shadowing and scattering effects. For each Gaussian, the approximation samples the scattering integrand at the projection of its center onto the view ray, but models attenuation and shadowing with respect to the other Gaussians by integrating density along the fixed path from the light source to the Gaussian's 3D center to the viewpoint. Our method handles isotropic, single-scattering media illuminated by point light sources or low-frequency lighting environments. We also generalize models for the reflectance of surfaces from constant-density to inhomogeneous media, using simple optical-depth averaging in the direction of the light source or all around the receiver point. Our real-time renderer is incorporated into a system for real-time design and preview of realistic animated fog, steam, or smoke.
{"title":"Fogshop: Real-Time Design and Rendering of Inhomogeneous, Single-Scattering Media","authors":"Kun Zhou, Qiming Hou, Minmin Gong, John M. Snyder, B. Guo, H. Shum","doi":"10.1109/PG.2007.48","DOIUrl":"https://doi.org/10.1109/PG.2007.48","url":null,"abstract":"We describe a new, analytic approximation to the airlight integral from scattering media whose density is modeled as a sum of Gaussians. The approximation supports real-time rendering of inhomogeneous media including their shadowing and scattering effects. For each Gaussian, this approximation samples the scattering integrand at the projection of its center along the view ray but models attenuation and shadowing with respect to the other Gaussians by integrating density along the fixed path from light source to 3D center to view point. Our method handles isotropic, single-scattering media illuminated by point light sources or low-frequency lighting environments. We also generalize models for reflectance of surfaces from constant-density to inhomogeneous media, using simple optical depth averaging in the direction of the light source or all around the receiver point. Our real-time renderer is incorporated into a system for real-time design and preview of realistic animated fog, steam, or smoke.","PeriodicalId":376934,"journal":{"name":"15th Pacific Conference on Computer Graphics and Applications (PG'07)","volume":"384 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126729823","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents a novel PRT-based method that uses precomputed visibility cuts for interactive relighting with all-frequency environment maps and arbitrary dynamic BRDFs. Our method is inspired by the recent Lightcuts approach [24]: we parameterize distant environment lighting onto uniformly distributed sample points over the sphere. Using a binary tree structure over these points, we precompute and approximate each vertex's visibility function into clusters that we call precomputed visibility cuts. These cuts are iteratively selected with bounded approximation error and confined cluster size. At run time, a GPU-based relighting algorithm quickly computes the view-dependent shading color by accessing a dynamically built light tree, the precomputed visibility cuts, and a direct sampling of an arbitrary BRDF using each visibility cluster's average direction and the dynamic view direction. Compared to existing PRT techniques, our method guarantees uniform sampling of the lighting, requires no precomputed BRDF data, and can easily be extended to handle one-bounce glossy indirect transfer effects in real time.
{"title":"Precomputed Visibility Cuts for Interactive Relighting with Dynamic BRDFs","authors":"O. Åkerlund, M. Unger, Rui Wang","doi":"10.1109/PG.2007.30","DOIUrl":"https://doi.org/10.1109/PG.2007.30","url":null,"abstract":"This paper presents a novel PRT-based method that uses precomputed visibility cuts for interactive relighting with all-frequency environment maps and arbitrary dynamic BRDFs. Our method is inspired by the recent Lightcuts approach [24] and we parameterize distant environment lighting onto uniformly distributed sample points over the sphere. Using a binary tree structure of the points, we precompute and approximate each vertex's visibility function into clusters that we call the precomputed visibility cuts. These cuts are iteratively selected with bounded approximation error and confined cluster size. At run-time, a GPU-based relighting algorithm quickly computes the view-dependent shading color by accessing a dynamically built light tree, the precomputed visibility cuts, and a direct sampling of an arbitrary BRDF using each visibility cluster's average direction and the dynamic view direction. Compared to existing PRT techniques, our method guarantees uniform sampling of the lighting, requires no precomputed BRDF data, and can be easily extended to handle one-bounce glossy indirect transfer effects in real-time.","PeriodicalId":376934,"journal":{"name":"15th Pacific Conference on Computer Graphics and Applications (PG'07)","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121067688","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Following the increasing demand to make the creation and manipulation of 3D geometry simpler and more accessible, we introduce a modeling approach that allows even novice users to create sophisticated models in minutes. Our approach is based on the observation that in many modeling settings users create models that belong to a small set of model classes, such as humans or quadrupeds. The models within each class typically share a common component structure. Following this observation, we introduce a modeling system that utilizes this common component structure, allowing users to create new models by shuffling interchangeable components between existing models. To enable shuffling, we develop a method for computing a compatible segmentation of input models into meaningful, interchangeable components. Using this segmentation, our system lets users create new models with a few mouse clicks, in a fraction of the time required by previous composition techniques. We demonstrate that the shuffling paradigm allows for easy and fast creation of rich geometric content.
{"title":"Model Composition from Interchangeable Components","authors":"Vladislav Kraevoy, D. Julius, A. Sheffer","doi":"10.1109/PG.2007.40","DOIUrl":"https://doi.org/10.1109/PG.2007.40","url":null,"abstract":"Following the increasing demand to make the creation and manipulation of 3D geometry simpler and more accessible, we introduce a modeling approach that allows even novice users to create sophisticated models in minutes. Our approach is based on the observation that in many modeling settings users create models which belong to a small set of model classes, such as humans or quadrupeds. The models within each class typically share a common component structure. Following this observation, we introduce a modeling system which utilizes this common component structure allowing users to create new models by shuffling interchangeable components between existing models. To enable shuffling, we develop a method for computing a compatible segmentation of input models into meaningful, interchangeable components. Using this segmentation our system lets users create new models with a few mouse clicks, in a fraction of the time required by previous composition techniques. We demonstrate that the shuffling paradigm allows for easy and fast creation of a rich geometric content.","PeriodicalId":376934,"journal":{"name":"15th Pacific Conference on Computer Graphics and Applications (PG'07)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128609807","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We complete and bring together two pairs of surface constructions that use polynomial pieces of degree (3,3) to associate a smooth surface with a mesh. The two pairs complement each other in that one extends the subdivision-modeling paradigm, the other the NURBS-patch approach to free-form modeling. Both Catmull-Clark [3] and polar subdivision [7] generalize bi-cubic spline subdivision. Together, they form a powerful combination for smooth object design: while Catmull-Clark subdivision is more suitable where few facets join, polar subdivision nicely models regions where many facets join, as when capping extruded features. We show how to easily combine the meshes of these two generalizations of bi-cubic spline subdivision. A related but different generalization of bi-cubic splines is to model non-tensor-product configurations by a finite set of smoothly connected bi-cubic patches. PCCM [12] does so for layouts where Catmull-Clark would apply. We show that a single NURBS patch can be used where polar subdivision would be applied. This spline is singularly parametrized, but, using a novel technique, we show that the surface is C^1 and has bounded curvatures.
{"title":"Extending Catmull-Clark Subdivision and PCCM with Polar Structures","authors":"A. Myles, K. Karčiauskas, J. Peters","doi":"10.1109/PG.2007.11","DOIUrl":"https://doi.org/10.1109/PG.2007.11","url":null,"abstract":"We complete and bring together two pairs of surface constructions that use polynomial pieces of degree (3,3) to associate a smooth surface with a mesh. The two pairs complement each other in that one extends the subdivisionmodeling paradigm, the other the NURBS patch approach to free-form modeling. Both Catmull-Clark [3] and polar subdivision [7] generalize bi-cubic spline subdivision. Together, they form a powerful combination for smooth object design: while Catmull-Clark subdivision is more suitable where few facets join, polar subdivision nicely models regions where many facets join, as when capping extruded features. We show how to easily combine the meshes of these two generalizations of bi-cubic spline subdivision. A related but different generalization of bi-cubic splines is to model non-tensor-product configurations by a finite set of smoothly connected bi-cubic patches. PCCM [12] does so for layouts where Catmull-Clark would apply. We show that a single NURBS patch can be used where polar subdivision would be applied. This spline is singularly parametrized, but, using a novel technique, we show that the surface is C1 and has bounded curvatures.","PeriodicalId":376934,"journal":{"name":"15th Pacific Conference on Computer Graphics and Applications (PG'07)","volume":"150 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117318236","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Skinned Mesh Animation (SMA) approximates a mesh animation well using extracted bones and their transformations. However, unlike a skeleton, the bones in an SMA are not organized in a hierarchy; they therefore need mesh-dependent translation vectors, which prevents other sources of motion (e.g., skeletal animations, motion-capture data, or other SMAs) from being applied to the skinned mesh. In this paper, we propose a new and fast method to transplant motion to skinned meshes. By efficiently solving a linear least-squares system, we compute new translation vectors that enable the motion to work on the skinned mesh. Based on the same idea, we have also devised an SMA editing tool that allows users to edit frames of an SMA interactively. Furthermore, the editing can be propagated to all subsequent frames.
{"title":"Transplanting and Editing Animations on Skinned Meshes","authors":"Yuntao Jia, Wei-Wen Feng, Yizhou Yu","doi":"10.1109/PG.2007.41","DOIUrl":"https://doi.org/10.1109/PG.2007.41","url":null,"abstract":"Skinned Mesh Animation (SMA) well approximates a mesh animation with extracted bones and their transformations. However, unlike skeleton, bones in SMA are not organized in hierarchies, thus they need mesh dependent translation vectors which prevent other sources of motion (i.e. skeletal animations, MoCAP, SMAs etc) from being applied to the skinned mesh. In this paper, we propose a new and fast method to transplant motion to skinned meshes. By efficiently solving a linear least-squares system, we can compute new translation vectors which enable the motion to work on the skinned mesh. Based on the same idea, we have also devised a SMA editing tool which allows users to edit frames of the SMA interactively. Furthermore, the editing can be propagated to all subsequent frames.","PeriodicalId":376934,"journal":{"name":"15th Pacific Conference on Computer Graphics and Applications (PG'07)","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117048598","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A new geometry-based finite difference method for fast and reliable detection of perceptually salient curvature extrema on surfaces approximated by dense triangle meshes is proposed. The foundations of the method are two simple curvature and curvature-derivative formulas overlooked in modern differential geometry textbooks and a seemingly new observation about inversion-invariant local surface-based differential forms.
{"title":"Fast and Faithful Geometric Algorithm for Detecting Crest Lines on Meshes","authors":"S. Yoshizawa, A. Belyaev, H. Yokota, H. Seidel","doi":"10.1109/PG.2007.24","DOIUrl":"https://doi.org/10.1109/PG.2007.24","url":null,"abstract":"A new geometry-based finite difference method for a fast and reliable detection of perceptually salient curvature extrema on surfaces approximated by dense triangle meshes is proposed. The foundations of the method are two simple curvature and curvature derivative formulas overlooked in modern differential geometry textbooks and seemingly new observation about inversion-invariant local surface-based differential forms.","PeriodicalId":376934,"journal":{"name":"15th Pacific Conference on Computer Graphics and Applications (PG'07)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115826513","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents a practical, high-quality, hardware-accelerated volume rendering approach that includes scattering, environment mapping, and ambient occlusion. We examine the application of stochastic raytracing techniques to volume rendering and provide a fast GPU-based prototype implementation. In addition, we propose a simple phenomenological scattering model, closely related to the Phong illumination model that many artists are familiar with. We demonstrate that our technique is capable of producing convincing images while remaining flexible enough for use in digital productions.
{"title":"GPU-Based Monte-Carlo Volume Raycasting","authors":"Christof Rezk Salama","doi":"10.1109/PG.2007.33","DOIUrl":"https://doi.org/10.1109/PG.2007.33","url":null,"abstract":"This paper presents a practical, high-quality, hardware-accelerated volume rendering approach including scattering, environment mapping, and ambient occlusion. We examine the application of stochastic raytracing techniques for volume rendering and provide a fast GPU-based prototype implementation. In addition, we propose a simple phenomenological scattering model, closely related to the Phong illumination model that many artists are familiar with. We demonstrate our technique being capable of producing convincing images, yet flexible enough for digital productions in practice.","PeriodicalId":376934,"journal":{"name":"15th Pacific Conference on Computer Graphics and Applications (PG'07)","volume":"118 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116346702","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Developable surfaces have many desirable properties for manufacturing processes. Since most existing CAD systems use parametric surfaces as the design primitive, there is great demand in industry to convert a parametric surface into developable patches within a prescribed global error bound. In this work we propose a simple and efficient solution for approximating a general parametric surface with a minimum set of C0-joint developable strips. The key contribution of the proposed algorithm is that several global optimization problems are solved in a sequence that offers a controllable global error bound on the developable surface approximation. Experimental results demonstrate the effectiveness and stability of the proposed algorithm.
{"title":"Developable Strip Approximation of Parametric Surfaces with Global Error Bounds","authors":"Yong-Jin Liu, Yu-Kun Lai, Shimin Hu","doi":"10.1109/PG.2007.13","DOIUrl":"https://doi.org/10.1109/PG.2007.13","url":null,"abstract":"Developable surfaces have many desired properties in manufacturing process. Since most existing CAD systems utilize parametric surfaces as the design primitive, there is a great demand in industry to convert a parametric surface within a prescribed global error bound into developable patches. In this work we propose a simple and efficient solution to approximate a general parametric surface with a minimum set of C0-joint developable strips. The key contribution of the proposed algorithm is that, several global optimization problems are elegantly solved in a sequence that offers a controllable global error bound on the developable surface approximation. Experimental results are presented to demonstrate the effectiveness and stability of the proposed algorithm.","PeriodicalId":376934,"journal":{"name":"15th Pacific Conference on Computer Graphics and Applications (PG'07)","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130161655","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}