{"title":"Illumination Brush: Interactive Design of All-Frequency Lighting","authors":"Makoto Okabe, Y. Matsushita, Li Shen, T. Igarashi","doi":"10.1109/PG.2007.9","DOIUrl":"https://doi.org/10.1109/PG.2007.9","url":null,"abstract":"We present an appearance-based user interface for artists to efficiently design customized image-based lighting environments. Our approach avoids the typical iterations of parameter editing, rendering, and confirmation by providing a set of intuitive user interfaces for directly specifying the desired appearance of the model in the scene. The system then automatically creates the lighting environment by solving the inverse shading problem. To obtain a realistic image, all-frequency lighting is used with a spherical radial basis function (SRBF) representation. Rendering is performed using precomputed radiance transfer (PRT) to achieve responsive speeds. User experiments demonstrated the effectiveness of the proposed system compared to a previous approach.","PeriodicalId":376934,"journal":{"name":"15th Pacific Conference on Computer Graphics and Applications (PG'07)","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130312656","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
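The inverse shading step described in the abstract above amounts to fitting light-basis weights so that the rendered appearance matches the artist's target. A minimal least-squares sketch of that idea (the toy transfer matrix and target below are illustrative stand-ins, not the paper's SRBF/PRT data):

```python
import numpy as np

def solve_lighting(transfer, target):
    """Least-squares light coefficients x minimizing ||transfer @ x - target||,
    clamped so that no basis light gets a negative intensity."""
    x, *_ = np.linalg.lstsq(transfer, target, rcond=None)
    return np.maximum(x, 0.0)

# toy transfer matrix: 4 surface points lit by 2 basis lights
T = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.5, 0.5],
              [0.2, 0.8]])
b = T @ np.array([0.3, 0.7])        # appearance "painted" by the artist
x = solve_lighting(T, b)            # recovers the light weights
```

In a real system the transfer matrix comes from the PRT precomputation and the solve would typically be regularized.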
{"title":"Simple and Efficient Mesh Editing with Consistent Local Frames","authors":"N. Paries, P. Degener, R. Klein","doi":"10.1109/PG.2007.43","DOIUrl":"https://doi.org/10.1109/PG.2007.43","url":null,"abstract":"Mesh editing methods based on differential surface representations are known for their efficiency and ease of implementation. For reconstruction from such representations, local frames have to be determined, which is a nonlinear problem. In linear approximations, frames can either degenerate or become inconsistent with the geometry; both result in counterintuitive deformations. Existing nonlinear approaches, however, are comparatively slow and considerably more complex. In this paper we present a differential representation that implicitly enforces orthogonal and geometry-consistent frames while allowing a simple and efficient implementation. In particular, it enforces conformal surface deformations that preserve local texture features.","PeriodicalId":376934,"journal":{"name":"15th Pacific Conference on Computer Graphics and Applications (PG'07)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114892473","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
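The reconstruct-from-differential-coordinates pipeline can be illustrated with plain uniform Laplacian editing: store each vertex's Laplacian (differential) coordinate, then solve a linear system with positional constraints. This generic sketch deliberately omits local frames entirely, which is exactly the simplification whose artifacts the paper addresses; `laplacian_edit` and the soft-constraint weighting are my own illustration:

```python
import numpy as np

def laplacian_edit(verts, edges, handles, w=1e3):
    """Solve for new vertex positions that preserve the uniform Laplacian
    (differential) coordinates under soft positional constraints.
    handles maps a vertex index to its target position."""
    n = len(verts)
    L = np.zeros((n, n))
    for i, j in edges:                      # graph Laplacian of the mesh
        L[i, i] += 1.0; L[j, j] += 1.0
        L[i, j] -= 1.0; L[j, i] -= 1.0
    delta = L @ verts                       # differential coords to preserve
    rows, rhs = [L], [delta]
    for idx, pos in handles.items():        # heavily weighted handle rows
        r = np.zeros((1, n)); r[0, idx] = w
        rows.append(r)
        rhs.append(w * np.asarray(pos, dtype=float)[None, :])
    sol, *_ = np.linalg.lstsq(np.vstack(rows), np.vstack(rhs), rcond=None)
    return sol

# three collinear vertices; pin the left end, lift the right end
verts = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
new = laplacian_edit(verts, [(0, 1), (1, 2)], {0: [0.0, 0.0], 2: [2.0, 1.0]})
```

The middle vertex follows the handles smoothly (here to (1, 0.5)), but without frame handling rotations of local detail are not reproduced correctly.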
{"title":"QAS: Real-Time Quadratic Approximation of Subdivision Surfaces","authors":"T. Boubekeur, C. Schlick","doi":"10.1109/PG.2007.20","DOIUrl":"https://doi.org/10.1109/PG.2007.20","url":null,"abstract":"We introduce QAS, an efficient quadratic approximation of subdivision surfaces that closely matches the appearance of the true subdivision surface while avoiding recursion, providing at least an order of magnitude faster rendering. QAS uses enriched polygons, equipped with edge vertices, and replaces them on the fly with low-degree polynomials that interpolate positions and normals. By systematically projecting the vertices of the input coarse mesh to their limit positions on the subdivision surface, the visual quality of the approximation is good enough to require only a single subdivision step, followed by our patch fitting, which allows real-time performance for outputs of millions of polygons. Additionally, the parametric nature of the approximation offers efficient adaptive sampling for rendering and displacement mapping. Lastly, the hexagonal support associated with each coarse triangle is well suited to geometry processors.","PeriodicalId":376934,"journal":{"name":"15th Pacific Conference on Computer Graphics and Applications (PG'07)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123141729","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
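A quadratic patch over a triangle "enriched" with edge vertices, as in the abstract above, can be evaluated with quadratic Bernstein polynomials. The sketch below assumes a standard quadratic Bézier triangle layout with three corner and three mid-edge control points; QAS's actual fitting of those controls may differ:

```python
import numpy as np

def quad_patch(corners, edge_pts, u, v):
    """Evaluate a quadratic Bezier triangle at barycentric (u, v, 1-u-v).
    corners: 3 corner control points; edge_pts[k] sits between
    corners[k] and corners[(k + 1) % 3]."""
    w = 1.0 - u - v
    b = np.asarray(corners, dtype=float)
    e = np.asarray(edge_pts, dtype=float)
    return (u * u * b[0] + v * v * b[1] + w * w * b[2]
            + 2.0 * u * v * e[0] + 2.0 * v * w * e[1] + 2.0 * w * u * e[2])

corners = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
mids = [[0.5, 0.0], [0.5, 0.5], [0.0, 0.5]]   # flat mid-edge controls -> planar patch
center = quad_patch(corners, mids, 1.0 / 3.0, 1.0 / 3.0)
```

With mid-edge controls at the edge midpoints the patch degenerates to the flat triangle, so the barycentric center evaluates to the centroid; displacing them curves the patch while still interpolating the corners.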
{"title":"Lighting Details Preserving Photon Density Estimation","authors":"R. Herzog, H. Seidel","doi":"10.1109/PG.2007.57","DOIUrl":"https://doi.org/10.1109/PG.2007.57","url":null,"abstract":"Standard density estimation approaches suffer from visible bias due to low-pass filtering of the lighting function. Therefore, most photon density estimation methods have been used primarily with inefficient Monte Carlo final gathering to achieve high-quality results for the indirect illumination. We present a density estimation technique for efficiently computing all-frequency global illumination in diffuse and moderately glossy scenes. In particular, we compute the direct, indirect, and caustic illumination during photon tracing from the light sources. Since the high frequencies in the illumination often arise from visibility changes and surface normal variations, we consider a kernel that takes these factors into account. To efficiently detect visibility changes, we introduce a hierarchical voxel data structure of the scene geometry, which is generated on the GPU. Further, we preserve the surface orientation by computing the density estimation in ray space.","PeriodicalId":376934,"journal":{"name":"15th Pacific Conference on Computer Graphics and Applications (PG'07)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116854082","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
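The idea of a normal-aware density-estimation kernel can be sketched with a classic k-nearest-neighbor photon estimate that simply rejects photons whose stored normal disagrees with the query point's normal. This is a simplified stand-in for the paper's ray-space estimation and voxel-based visibility test, with made-up parameters:

```python
import numpy as np

def estimate_irradiance(query_pos, query_n, photons, k=4, n_thresh=0.8):
    """kNN photon density estimate: sum of the k nearest photon fluxes over
    the disc of radius r_k. Photons whose normals disagree with the query
    normal are rejected, to avoid blurring across geometric edges."""
    pos = np.array([p[0] for p in photons])
    nrm = np.array([p[1] for p in photons])
    flux = np.array([p[2] for p in photons])
    keep = nrm @ np.asarray(query_n) > n_thresh      # normal-aware kernel
    pos, flux = pos[keep], flux[keep]
    d = np.linalg.norm(pos - query_pos, axis=1)
    idx = np.argsort(d)[:k]
    r = d[idx].max()
    return flux[idx].sum() / (np.pi * r * r)

photons = [
    (( 1.0, 0.0, 0.0), (0.0, 0.0, 1.0), np.pi / 4),
    ((-1.0, 0.0, 0.0), (0.0, 0.0, 1.0), np.pi / 4),
    (( 0.0, 1.0, 0.0), (0.0, 0.0, 1.0), np.pi / 4),
    (( 0.0,-1.0, 0.0), (0.0, 0.0, 1.0), np.pi / 4),
    (( 0.1, 0.0, 0.0), (0.0, 0.0,-1.0), 100.0),   # wrong-facing photon: rejected
]
e = estimate_irradiance(np.zeros(3), (0.0, 0.0, 1.0), photons, k=4)
```

Without the normal test, the bright back-facing photon would dominate the estimate; with it, the four valid photons of total flux pi over a unit disc give an estimate of exactly 1.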
{"title":"Multilinear Motion Synthesis with Level-of-Detail Controls","authors":"Tomohiko Mukai, Shigeru Kuriyama","doi":"10.1109/PG.2007.36","DOIUrl":"https://doi.org/10.1109/PG.2007.36","url":null,"abstract":"Interactive animation systems often use a level-of-detail (LOD) control to reduce the computational cost by eliminating unperceivable details of the scene. Most methods employ a multiresolution representation of animation and geometrical data, and adaptively change the accuracy level according to the importance of each character. Multilinear analysis provides an efficient representation of multidimensional and multimodal data, including human motion data, based on statistical data correlations. This paper proposes an LOD control method for motion synthesis with a multilinear model. Our method first extracts a small number of principal components of motion samples by analyzing three-mode correlations among joints, time, and samples using high-order singular value decomposition. A new motion is synthesized by interpolating the reduced components using geostatistics, where the prediction accuracy of the resulting motion is controlled by adaptively decreasing the data dimensionality. We introduce a hybrid algorithm to optimize the reduction size and computational time according to the distance from the camera while maintaining visual quality. Our method provides a practical tool for creating interactive animations of many characters while ensuring accurate and flexible controls at a modest computational cost.","PeriodicalId":376934,"journal":{"name":"15th Pacific Conference on Computer Graphics and Applications (PG'07)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121029010","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
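The three-mode analysis behind this abstract can be sketched with a truncated higher-order SVD: unfold the joints x time x samples tensor along each mode, keep the leading left singular vectors, and project onto them. This generic HOSVD sketch omits the paper's geostatistical interpolation and adaptive rank control:

```python
import numpy as np

def hosvd(T, ranks):
    """Truncated higher-order SVD of a 3-mode tensor T (e.g. joints x time x
    samples). Returns the core tensor and per-mode factor matrices."""
    factors = []
    for mode, r in enumerate(ranks):
        unfolding = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        factors.append(U[:, :r])              # leading singular vectors
    core = T
    for mode, U in enumerate(factors):        # project each mode onto its basis
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

def reconstruct(core, factors):
    out = core
    for mode, U in enumerate(factors):        # expand each mode back
        out = np.moveaxis(np.tensordot(U, np.moveaxis(out, mode, 0), axes=1), 0, mode)
    return out

# a rank-1 "motion tensor" is captured exactly by ranks (1, 1, 1)
a, b, c = np.array([1.0, 2.0]), np.array([1.0, 0.0, 1.0]), np.array([2.0, 1.0])
T = np.einsum('i,j,k->ijk', a, b, c)
core, factors = hosvd(T, (1, 1, 1))
approx = reconstruct(core, factors)
```

Lowering the ranks shrinks the core tensor, which is exactly the dimensionality knob an LOD scheme can turn per character.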
{"title":"Wrestle Alone: Creating Tangled Motions of Multiple Avatars from Individually Captured Motions","authors":"Edmond S. L. Ho, T. Komura","doi":"10.1109/PG.2007.54","DOIUrl":"https://doi.org/10.1109/PG.2007.54","url":null,"abstract":"Animations of two avatars tangled with each other often appear in battle or fighting scenes in movies and games. However, creating such scenes is difficult due to the limitations of tracking devices and the complex interactions between the avatars during such motions. In this paper, we propose a new method to generate animations of two persons tangled with each other based on individually captured motions, using wrestling as an example. The inputs to the system are two individually captured motions and the topological relationship of the two avatars computed using the Gauss Linking Integral (GLI). The system then edits the captured motions so that they satisfy the given topological relationship. Using our method, it is possible to create and edit close-contact motions with minimal effort from animators. The method can be used not only for wrestling, but also for any movement that requires one body to be tangled with another, such as holding the shoulder of an elderly person to help them walk, or a soldier carrying an injured soldier on his back.","PeriodicalId":376934,"journal":{"name":"15th Pacific Conference on Computer Graphics and Applications (PG'07)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131073970","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
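The Gauss Linking Integral used above to characterize tangling has a standard discrete form: a double sum of the Gauss integrand over all pairs of segments of the two curves. A sketch using midpoint quadrature (for two closed curves forming a Hopf link, the value approaches the linking number, +/-1):

```python
import numpy as np

def gauss_linking(curve_a, curve_b):
    """Discrete Gauss linking integral between two closed polylines,
    given as (n, 3) point arrays; the closing edge is added automatically."""
    def segs(c):
        mid = (c + np.roll(c, -1, axis=0)) / 2.0     # segment midpoints
        tan = np.roll(c, -1, axis=0) - c             # tangent * segment length
        return mid, tan
    pa, ta = segs(np.asarray(curve_a, dtype=float))
    pb, tb = segs(np.asarray(curve_b, dtype=float))
    r = pa[:, None, :] - pb[None, :, :]
    d3 = np.linalg.norm(r, axis=2) ** 3
    cross = np.cross(ta[:, None, :], tb[None, :, :])
    return np.sum(np.einsum('ijk,ijk->ij', cross, r) / d3) / (4.0 * np.pi)

# two unit circles forming a Hopf link: one in the xy-plane at the origin,
# one in the xz-plane centered at (1, 0, 0)
t = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
A = np.stack([np.cos(t), np.sin(t), np.zeros_like(t)], axis=1)
B = np.stack([1.0 + np.cos(t), np.zeros_like(t), np.sin(t)], axis=1)
lk = gauss_linking(A, B)
```

For open curves such as limb segments the same sum gives a fractional "tangledness" measure rather than an integer, which is how a GLI-based method can compare and enforce topological relationships.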
{"title":"Real-time Sound Generation of Spark Discharge","authors":"Katsutsugu Matsuyama, T. Fujimoto, Norishige Chiba","doi":"10.1109/PG.2007.29","DOIUrl":"https://doi.org/10.1109/PG.2007.29","url":null,"abstract":"A technique for automatically generating and adding sound to interactive CG animations of spark discharges in real time has been developed. In the proposed procedure, the user inputs the electric charge distribution, boundary conditions, and other parameters affecting the initiation of electric discharges in virtual space. The animation of the discharge is then created by generating the shape of the discharge pattern and rendering it, and the sound synchronized with the animation is automatically generated in real time. The sounds of spark discharges are shock waves, which exhibit complicated behavior; in this study, however, an empirical shock-wave shape is employed to generate the acoustic waveform efficiently. Effective procedures for expressing lightning discharges and continuous discharges are also proposed.","PeriodicalId":376934,"journal":{"name":"15th Pacific Conference on Computer Graphics and Applications (PG'07)","volume":"206 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132726762","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
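The classic empirical far-field pressure shape for a spark's shock is an N-wave: an instantaneous rise followed by a linear fall and an instantaneous recovery. A sketch with illustrative parameters (the paper's exact waveform model may differ):

```python
import numpy as np

def n_wave(amplitude, duration, sample_rate):
    """Synthesize an N-wave: pressure jumps to +A, decays linearly to -A,
    then snaps back to ambient. Returns the sample buffer."""
    n = max(int(duration * sample_rate), 2)
    return amplitude * np.linspace(1.0, -1.0, n)

w = n_wave(1.0, 0.001, 44100)   # a 1 ms spark "crack" at 44.1 kHz
```

Summing many such pulses with per-branch delays and amplitudes is one plausible way to layer a full discharge sound from its individual spark segments.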
{"title":"Image-Based Proxy Accumulation for Real-Time Soft Global Illumination","authors":"Peter-Pike J. Sloan, N. Govindaraju, D. Nowrouzezahrai, John M. Snyder","doi":"10.1109/PG.2007.28","DOIUrl":"https://doi.org/10.1109/PG.2007.28","url":null,"abstract":"We present a new, general, and real-time technique for soft global illumination in low-frequency environmental lighting. It accumulates over relatively few spherical proxies which approximate the light blocking and re-radiating effect of dynamic geometry. Soft shadows are computed by accumulating log visibility vectors for each sphere proxy as seen by each receiver point. Inter-reflections are computed by accumulating vectors representing the proxy's unshadowed radiance when illuminated by the environment. Both vectors capture low-frequency directional dependence using the spherical harmonic basis. We also present a new proxy accumulation strategy that splats each proxy to receiver pixels in image space to collect its shadowing and indirect lighting contribution. Our soft GI rendering pipeline unifies direct and indirect soft effects with a simple accumulation strategy that maps entirely to the GPU and outperforms previous vertex-based methods.","PeriodicalId":376934,"journal":{"name":"15th Pacific Conference on Computer Graphics and Applications (PG'07)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128421012","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
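Accumulating *log* visibility turns the product of per-blocker visibility factors into a sum, which is what makes order-independent accumulation over proxies possible. A scalar sketch of that idea (the paper accumulates spherical-harmonic vectors, not scalars):

```python
import numpy as np

def accumulate_visibility(per_proxy_vis):
    """Combine per-proxy visibility factors multiplicatively by summing
    their logarithms and exponentiating the total."""
    return float(np.exp(np.sum(np.log(np.asarray(per_proxy_vis)))))

v = accumulate_visibility([0.5, 0.8, 0.9])   # three partial occluders
```

Because addition commutes, the proxies can be splatted and summed in any order (for example per pixel on the GPU) before a single exponentiation.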
{"title":"Freeform Image","authors":"T. Schiwietz, J. Georgii, R. Westermann","doi":"10.1109/PG.2007.44","DOIUrl":"https://doi.org/10.1109/PG.2007.44","url":null,"abstract":"In this paper we present a technique for image deformation in which the user is given flexible control over what kind of deformation to perform. Freeform Image extends available image deformation techniques by providing a palette of intuitive tools, including interactive object segmentation, stiffness editing, and force-based controls, to achieve both a natural look and realistic animations of deforming parts. The model underlying our approach is physics-based and amenable to a variety of image manipulations, ranging from as-rigid-as-possible to fully elastic deformations. We have developed a multigrid solver for quadrangular finite elements that achieves real-time performance for high-resolution pixel grids. On recent CPUs this solver can handle about 16K co-rotated finite elements in roughly 60 ms.","PeriodicalId":376934,"journal":{"name":"15th Pacific Conference on Computer Graphics and Applications (PG'07)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129907314","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
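The multigrid idea behind such a solver can be sketched on the 1D model problem -u'' = f with a two-grid cycle: smooth the high-frequency error with weighted Jacobi, solve for the remaining smooth error on a coarser grid, and correct. This is a textbook sketch, not the paper's co-rotated quadrangular FEM solver:

```python
import numpy as np

def jacobi(u, f, h, iters, omega=2.0 / 3.0):
    """Weighted-Jacobi smoothing (in place) for -u'' = f, u(0) = u(1) = 0."""
    for _ in range(iters):
        u[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def v_cycle(u, f, h):
    """One two-grid cycle: pre-smooth, coarse-grid correction, post-smooth."""
    u = jacobi(u, f, h, 3)
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] + (u[:-2] - 2 * u[1:-1] + u[2:]) / (h * h)   # residual
    rc = r[::2].copy()                                             # restrict by injection
    H, m = 2 * h, len(rc)
    A = (np.diag(np.full(m - 2, 2.0))                              # coarse operator
         - np.diag(np.ones(m - 3), 1)
         - np.diag(np.ones(m - 3), -1)) / (H * H)
    ec = np.zeros(m)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])                        # coarse error solve
    e = np.zeros_like(u)
    e[::2] = ec
    e[1:-1:2] = 0.5 * (ec[:-1] + ec[1:])                           # prolong: linear interp
    return jacobi(u + e, f, h, 3)

# exact solution sin(pi x); a few cycles reach discretization accuracy
x = np.linspace(0.0, 1.0, 65)
h = x[1] - x[0]
f = np.pi ** 2 * np.sin(np.pi * x)
u = np.zeros_like(x)
for _ in range(10):
    u = v_cycle(u, f, h)
```

A full multigrid solver recurses instead of solving the coarse grid directly, giving the O(n) cost per solve that makes real-time FEM rates plausible.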
{"title":"A Delaunay Simplification Algorithm for Vector Fields","authors":"T. Dey, J. Levine, R. Wenger","doi":"10.1109/PG.2007.34","DOIUrl":"https://doi.org/10.1109/PG.2007.34","url":null,"abstract":"We present a Delaunay-based algorithm for simplifying vector field datasets. Our aim is to reduce the size of the mesh on which the vector field is defined while preserving topological features of the original vector field. We leverage a simple paradigm, vertex deletion in Delaunay triangulations, to achieve this goal. This technique is effective for two reasons. First, we guide deletions by a local error metric that bounds the change of the vectors at the affected simplices and maintains regions near critical points to prevent topological changes. Second, piecewise-linear interpolation over Delaunay triangulations is known to give good approximations of scalar fields. Since a vector field can be regarded as a collection of component scalar fields, a Delaunay triangulation can preserve each component and thus the structure of the vector field as a whole. We provide experimental evidence showing the effectiveness of our technique and its ability to preserve features of both two- and three-dimensional vector fields.","PeriodicalId":376934,"journal":{"name":"15th Pacific Conference on Computer Graphics and Applications (PG'07)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133199859","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
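The vertex-deletion paradigm with a local error metric can be sketched greedily: delete a vertex if piecewise-linear interpolation from the remaining points reproduces its vector within tolerance. This sketch uses SciPy's Delaunay machinery and omits the paper's critical-point protection and simplex-wise error bound:

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import LinearNDInterpolator

def simplify_field(points, vectors, tol):
    """Greedy vertex deletion for a sampled 2D vector field: a vertex is
    dropped when linear interpolation from the surviving points reproduces
    its vector within tol. Convex-hull vertices are always kept."""
    keep = np.ones(len(points), dtype=bool)
    boundary = np.unique(Delaunay(points).convex_hull)
    for i in range(len(points)):
        if i in boundary:
            continue
        keep[i] = False                      # tentatively delete
        interp = LinearNDInterpolator(points[keep], vectors[keep])
        v = interp(points[i])
        if not np.all(np.isfinite(v)) or np.linalg.norm(v - vectors[i]) > tol:
            keep[i] = True                   # deletion too lossy; restore
    return keep

# a linear vector field on a 5x5 grid: every interior vertex is redundant
g = np.linspace(0.0, 1.0, 5)
X, Y = np.meshgrid(g, g)
pts = np.column_stack([X.ravel(), Y.ravel()])
vecs = np.column_stack([pts[:, 0] + pts[:, 1], pts[:, 0] - pts[:, 1]])
kept = simplify_field(pts, vecs, tol=1e-6)
```

Because piecewise-linear interpolation reproduces a linear field exactly, the error metric lets every interior vertex go while the hull vertices survive.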