Fundamentals of spherical parameterization for 3D meshes
C. Gotsman, X. Gu, A. Sheffer. ACM SIGGRAPH 2003 Papers, July 2003. DOI: 10.1145/1201775.882276

Parameterization of 3D mesh data is important for many graphics applications, in particular for texture mapping, remeshing and morphing. Closed manifold genus-0 meshes are topologically equivalent to a sphere, hence this is the natural parameter domain for them. Parameterizing a triangle mesh onto the sphere means assigning a 3D position on the unit sphere to each of the mesh vertices, such that the spherical triangles induced by the mesh connectivity are not too distorted and do not overlap. Satisfying the non-overlapping requirement is the most difficult and critical component of this process. We describe a generalization of the method of barycentric coordinates for planar parameterization which solves the spherical parameterization problem, prove its correctness by establishing a connection to spectral graph theory and show how to compute these parameterizations.
Animating suspended particle explosions
Bryan E. Feldman, J. F. O'Brien, Okan Arikan. ACM SIGGRAPH 2003 Papers, July 2003. DOI: 10.1145/1201775.882336

This paper describes a method for animating suspended particle explosions. Rather than modeling the numerically troublesome, and largely invisible, blast wave, the method uses a relatively stable incompressible fluid model to account for the motion of air and hot gases. The fluid's divergence field is adjusted directly to account for detonations and the generation and expansion of gaseous combustion products. Particles immersed in the fluid track the motion of particulate fuel and soot as they are advected by the fluid. Combustion is modeled using a simple but effective process governed by the particle and fluid systems. The method is flexible enough to also approximate sprays of burning liquids. This paper includes several demonstrative examples showing air bursts, explosions near obstacles, confined explosions, and burning sprays. Because the method is based on components that allow large time-integration steps, it requires only a few seconds of computation per frame for the examples shown.
A geometry-based soft shadow volume algorithm using graphics hardware
Ulf Assarsson, T. Akenine-Möller. ACM SIGGRAPH 2003 Papers, July 2003. DOI: 10.1145/1201775.882300

Most previous soft shadow algorithms have suffered from aliasing, been too slow, or been able to use only a limited set of shadow casters and/or receivers. Therefore, we present an improved soft shadow volume algorithm that deals with these problems. Our critical improvements include robust penumbra wedge construction, geometry-based visibility computation, and simplified computation through a four-dimensional texture lookup. This enables us to implement the algorithm using programmable graphics hardware, and it results in images that are most often indistinguishable from images created as the average of 1024 hard shadow images. Furthermore, our algorithm can use both arbitrary shadow casters and receivers. In addition, one version of our algorithm completely avoids sampling artifacts, which is rare for soft shadow algorithms. As a bonus, the four-dimensional texture lookup allows for small textured light sources, and even video textures can be used as light sources. Our algorithm has been implemented in pure software, and also using the GeForce FX emulator with pixel shaders. Our software implementation renders soft shadows at 0.5--5 frames per second for the images in this paper. With actual hardware, we expect our algorithm to render soft shadows in real time. An important performance measure is bandwidth usage. For the same image quality, an algorithm using accumulated hard shadow images uses almost two orders of magnitude more bandwidth than ours.
Dual domain extrapolation
B. Lévy. ACM SIGGRAPH 2003 Papers, July 2003. DOI: 10.1145/1201775.882277

Shape optimization and surface fairing for polygon meshes have been active research areas for the last few years. Existing approaches either require the border of the surface to be fixed or are only applicable to closed surfaces. In this paper, we propose a new approach that computes natural boundaries. This makes it possible not only to smooth an existing geometry, but also to extrapolate its shape beyond the existing border. Our approach is based on a global parameterization of the surface and on a minimization of the squared curvatures, discretized on the edges of the surface. The surface constructed in this way is an approximation of a minimal energy surface (MES). Using a global parameterization makes it possible to completely decouple the outer fairness (surface smoothness) from the inner fairness (mesh quality). In addition, the parameter space provides the user with a new means of controlling the shape of the surface. When used as a geometry filter, our approach computes a smoothed mesh that is discrete conformal to the original one. This allows smoothing textured meshes without introducing distortions.
Out-of-core compression for gigantic polygon meshes
M. Isenburg, S. Gumhold. ACM SIGGRAPH 2003 Papers, July 2003. DOI: 10.1145/1201775.882366

Polygonal models acquired with emerging 3D scanning technology or from large-scale CAD applications easily reach sizes of several gigabytes and do not fit in the address space of common 32-bit desktop PCs. In this paper we propose an out-of-core mesh compression technique that converts such gigantic meshes into a streamable, highly compressed representation. During decompression, only a small portion of the mesh needs to be kept in memory at any time. As full connectivity information is available along the decompression boundaries, this provides seamless mesh access for incremental in-core processing of gigantic meshes. Decompression speeds are CPU-limited and exceed one million vertices and two million triangles per second on a 1.8 GHz Athlon processor. A novel external-memory data structure provides our compression engine with transparent access to arbitrarily large meshes. This out-of-core mesh was designed to accommodate the access pattern of our region-growing compressor, which, in turn, performs mesh queries as seldom and as locally as possible by remembering previous queries as long as needed and by adapting its traversal slightly. The achieved compression rates are state-of-the-art.
Flows on surfaces of arbitrary topology
J. Stam. ACM SIGGRAPH 2003 Papers, July 2003. DOI: 10.1145/1201775.882338

In this paper we introduce a method to simulate fluid flows on smooth surfaces of arbitrary topology: an effect never seen before. We achieve this by combining a two-dimensional stable fluid solver with an atlas of parametrizations of a Catmull-Clark surface. The contributions of this paper are: (i) an extension of the Stable Fluids solver to arbitrary curvilinear coordinates, (ii) an elegant method to handle cross-patch boundary conditions and (iii) a set of new external forces custom-tailored for surface flows. Our techniques can also be generalized to handle other types of processes on surfaces modeled by partial differential equations, such as reaction-diffusion. Some of our simulations allow a user to interactively place densities and apply forces to the surface, then watch their effects in real time. We have also computed higher-resolution animations of surface flows off-line.
Precomputing interactive dynamic deformable scenes
Doug L. James, K. Fatahalian. ACM SIGGRAPH 2003 Papers, July 2003. DOI: 10.1145/1201775.882359

We present an approach for precomputing data-driven models of interactive physically based deformable scenes. The method permits real-time hardware synthesis of nonlinear deformation dynamics, including self-contact and global illumination effects, and supports real-time user interaction. We use data-driven tabulation of the system's deterministic state space dynamics, and model reduction to build efficient low-rank parameterizations of the deformed shapes. To support runtime interaction, we also tabulate impulse response functions for a palette of external excitations. Although our approach simulates particular systems under very particular interaction conditions, it has several advantages. First, parameterizing all possible scene deformations enables us to precompute novel reduced coparameterizations of global scene illumination for low-frequency lighting conditions. Second, because the deformation dynamics are precomputed and parameterized as a whole, collisions are resolved within the scene during precomputation so that runtime self-collision handling is implicit. Optionally, the data-driven models can be synthesized on programmable graphics hardware, leaving only the low-dimensional state space dynamics and appearance data models to be computed by the main CPU.
Free-viewpoint video of human actors
J. Carranza, C. Theobalt, M. Magnor, H. Seidel. ACM SIGGRAPH 2003 Papers, July 2003. DOI: 10.1145/1201775.882309

In free-viewpoint video, the viewer can interactively choose a viewpoint in 3-D space to observe the action of a dynamic real-world scene from arbitrary perspectives. The human body and its motion play a central role in most visual media, and its structure can be exploited for robust motion estimation and efficient visualization. This paper describes a system that uses multi-view synchronized video footage of an actor's performance to estimate motion parameters and to interactively re-render the actor's appearance from any viewpoint. The actor's silhouettes are extracted from synchronized video frames via background segmentation and then used to determine a sequence of poses for a 3D human body model. By employing multi-view texturing during rendering, time-dependent changes in the body surface are reproduced in high detail. The motion capture subsystem runs offline, is non-intrusive, yields robust motion parameter estimates, and can cope with a broad range of motion. The rendering subsystem runs at real-time frame rates using ubiquitous graphics hardware, yielding a highly naturalistic impression of the actor. The actor can be placed in virtual environments to create composite dynamic scenes. Free-viewpoint video allows the creation of camera fly-throughs or viewing the action interactively from arbitrary perspectives.
Anisotropic polygonal remeshing
P. Alliez, D. Cohen-Steiner, O. Devillers, B. Lévy, M. Desbrun. ACM SIGGRAPH 2003 Papers, July 2003. DOI: 10.1145/1201775.882296

In this paper, we propose a novel polygonal remeshing technique that exploits a key aspect of surfaces: the intrinsic anisotropy of natural or man-made geometry. In particular, we use curvature directions to drive the remeshing process, mimicking the lines that artists themselves would use when creating 3D models from scratch. After extracting and smoothing the curvature tensor field of an input genus-0 surface patch, lines of minimum and maximum curvatures are used to determine appropriate edges for the remeshed version in anisotropic regions, while spherical regions are simply point sampled since there is no natural direction of symmetry locally. As a result our technique generates polygon meshes mainly composed of quads in anisotropic regions, and of triangles in spherical regions. Our approach provides the flexibility to produce meshes ranging from isotropic to anisotropic, from coarse to dense, and from uniform to curvature adapted.