Incorporating dynamic real objects into immersive virtual environments
Benjamin C. Lok, Samir Naik, M. Whitton, F. Brooks
DOI: 10.1145/1201775.882332
We present algorithms that enable virtual objects to interact with and respond to virtual representations (avatars) of real objects. These techniques allow dynamic real objects, such as the user, tools, and parts, to be visually and physically incorporated into the virtual environment (VE). The system uses image-based object reconstruction and a volume query mechanism to detect collisions and to determine plausible collision responses between virtual objects and the avatars. This allows our system to provide the user with natural interactions with the VE. We have begun a collaboration with NASA Langley Research Center to apply the hybrid environment system to a satellite payload assembly verification task. In an informal case study, NASA LaRC payload designers and engineers conducted common assembly tasks on payload models. The results suggest that hybrid environments could provide significant advantages for assembly verification and layout evaluation tasks.
{"title":"Incorporating dynamic real objects into immersive virtual environments","authors":"Benjamin C. Lok, Samir Naik, M. Whitton, F. Brooks","doi":"10.1145/1201775.882332","DOIUrl":"https://doi.org/10.1145/1201775.882332","url":null,"abstract":"We present algorithms that enable virtual objects to interact with and respond to virtual representations, avatars, of real objects. These techniques allow dynamic real objects, such as the user, tools, and parts, to be visually and physically incorporated into the virtual environment (VE). The system uses image-based object reconstruction and a volume query mechanism to detect collisions and to determine plausible collision responses between virtual objects and the avatars. This allows our system to provide the user natural interactions with the VE.We have begun a collaboration with NASA Langley Research Center to apply the hybrid environment system to a satellite payload assembly verification task. In an informal case study, NASA LaRC payload designers and engineers conducted common assembly tasks on payload models. The results suggest that hybrid environments could provide significant advantages for assembly verification and layout evaluation tasks.","PeriodicalId":314969,"journal":{"name":"ACM SIGGRAPH 2003 Papers","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129065773","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Billboard clouds for extreme model simplification
Xavier Décoret, F. Durand, F. Sillion, Julie Dorsey
DOI: 10.1145/1201775.882326
We introduce billboard clouds -- a new approach for extreme simplification in the context of real-time rendering. 3D models are simplified onto a set of planes with texture and transparency maps. We present an optimization approach to build a billboard cloud given a geometric error threshold. After computing an appropriate density function in plane space, a greedy approach is used to select suitable representative planes. A good surface approximation is ensured by favoring planes that are "nearly tangent" to the model. This method does not require connectivity information, but instead avoids cracks by projecting primitives onto multiple planes when needed. For extreme simplification, our approach combines the strengths of mesh decimation and image-based impostors. We demonstrate our technique on a large class of models, including smooth manifolds and composite objects.
{"title":"Billboard clouds for extreme model simplification","authors":"Xavier Décoret, F. Durand, F. Sillion, Julie Dorsey","doi":"10.1145/1201775.882326","DOIUrl":"https://doi.org/10.1145/1201775.882326","url":null,"abstract":"We introduce billboard clouds -- a new approach for extreme simplification in the context of real-time rendering. 3D models are simplified onto a set of planes with texture and transparency maps. We present an optimization approach to build a billboard cloud given a geometric error threshold. After computing an appropriate density function in plane space, a greedy approach is used to select suitable representative planes. A good surface approximation is ensured by favoring planes that are \"nearly tangent\" to the model. This method does not require connectivity information, but instead avoids cracks by projecting primitives onto multiple planes when needed. For extreme simplification, our approach combines the strengths of mesh decimation and image-based impostors. We demonstrate our technique on a large class of models, including smooth manifolds and composite objects.","PeriodicalId":314969,"journal":{"name":"ACM SIGGRAPH 2003 Papers","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127686477","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sequential point trees
C. Dachsbacher, C. Vogelgsang, M. Stamminger
DOI: 10.1145/1201775.882321
In this paper we present sequential point trees, a data structure that allows adaptive rendering of point clouds completely on the graphics processor. Sequential point trees are based on a hierarchical point representation, but the hierarchical rendering traversal is replaced by sequential processing on the graphics processor, while the CPU is available for other tasks. Smooth transition to triangle rendering for optimized performance is integrated. We describe optimizations for backface culling and texture adaptive point selection. Finally, we discuss implementation issues and show results.
{"title":"Sequential point trees","authors":"C. Dachsbacher, C. Vogelgsang, M. Stamminger","doi":"10.1145/1201775.882321","DOIUrl":"https://doi.org/10.1145/1201775.882321","url":null,"abstract":"In this paper we present sequential point trees, a data structure that allows adaptive rendering of point clouds completely on the graphics processor. Sequential point trees are based on a hierarchical point representation, but the hierarchical rendering traversal is replaced by sequential processing on the graphics processor, while the CPU is available for other tasks. Smooth transition to triangle rendering for optimized performance is integrated. We describe optimizations for backface culling and texture adaptive point selection. Finally, we discuss implementation issues and show results.","PeriodicalId":314969,"journal":{"name":"ACM SIGGRAPH 2003 Papers","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129978063","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Interactive shadow generation in complex environments
N. Govindaraju, Brandon Lloyd, Sung-eui Yoon, Avneesh Sud, Dinesh Manocha
DOI: 10.1145/1201775.882299
We present a new algorithm for interactive generation of hard-edged, umbral shadows in complex environments with a moving light source. Our algorithm uses a hybrid approach that combines the image quality of object-precision methods with the efficiencies of image-precision techniques. We present an algorithm for computing a compact potentially visible set (PVS) using levels-of-detail (LODs) and visibility culling. We use the PVSs computed from both the eye and the light in a novel cross-culling algorithm that identifies a reduced set of potential shadow-casters and shadow-receivers. Finally, we use a combination of shadow-polygons and shadow maps to generate shadows. We also present techniques for LOD-selection to minimize possible artifacts arising from the use of LODs. Our algorithm can generate sharp shadow edges and reduces the aliasing in pure shadow map approaches. We have implemented the algorithm on a three-PC system with NVIDIA GeForce 4 cards. We achieve 7--25 frames per second in three complex environments composed of millions of triangles.
{"title":"Interactive shadow generation in complex environments","authors":"N. Govindaraju, Brandon Lloyd, Sung-eui Yoon, Avneesh Sud, Dinesh Manocha","doi":"10.1145/1201775.882299","DOIUrl":"https://doi.org/10.1145/1201775.882299","url":null,"abstract":"We present a new algorithm for interactive generation of hard-edged, umbral shadows in complex environments with a moving light source. Our algorithm uses a hybrid approach that combines the image quality of object-precision methods with the efficiencies of image-precision techniques. We present an algorithm for computing a compact potentially visible set (PVS) using levels-of-detail (LODs) and visibility culling. We use the PVSs computed from both the eye and the light in a novel cross-culling algorithm that identifies a reduced set of potential shadow-casters and shadow-receivers. Finally, we use a combination of shadow-polygons and shadow maps to generate shadows. We also present techniques for LOD-selection to minimize possible artifacts arising from the use of LODs. Our algorithm can generate sharp shadow edges and reduces the aliasing in pure shadow map approaches. We have implemented the algorithm on a three-PC system with NVIDIA GeForce 4 cards. We achieve 7--25 frames per second in three complex environments composed of millions of triangles.","PeriodicalId":314969,"journal":{"name":"ACM SIGGRAPH 2003 Papers","volume":"103 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124603806","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sparse matrix solvers on the GPU: conjugate gradients and multigrid
J. Bolz, I. Farmer, E. Grinspun, P. Schröder
DOI: 10.1145/1201775.882364
Many computer graphics applications require high-intensity numerical simulation. We show that such computations can be performed efficiently on the GPU, which we regard as a full-function streaming processor with high floating-point performance. We implemented two basic, broadly useful computational kernels: a sparse-matrix conjugate gradient solver and a regular-grid multigrid solver. Real-time applications ranging from mesh smoothing and parameterization to fluid solvers and solid mechanics can greatly benefit from these, as evidenced by our example applications of geometric flow and fluid simulation running on NVIDIA's GeForce FX.
{"title":"Sparse matrix solvers on the GPU: conjugate gradients and multigrid","authors":"J. Bolz, I. Farmer, E. Grinspun, P. Schröder","doi":"10.1145/1201775.882364","DOIUrl":"https://doi.org/10.1145/1201775.882364","url":null,"abstract":"Many computer graphics applications require high-intensity numerical simulation. We show that such computations can be performed efficiently on the GPU, which we regard as a full function streaming processor with high floating-point performance. We implemented two basic, broadly useful, computational kernels: a sparse matrix conjugate gradient solver and a regular-grid multigrid solver. Real time applications ranging from mesh smoothing and parameterization to fluid solvers and solid mechanics can greatly benefit from these, evidence our example applications of geometric flow and fluid simulation running on NVIDIA's GeForce FX.","PeriodicalId":314969,"journal":{"name":"ACM SIGGRAPH 2003 Papers","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116701284","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Boom chameleon: simultaneous capture of 3D viewpoint, voice and gesture annotations on a spatially-aware display
M. Tsang, G. Fitzmaurice, G. Kurtenbach, Azam Khan, W. Buxton
DOI: 10.1145/1201775.882329
We present the Boom Chameleon, a novel input/output device consisting of a flat-panel display mounted on a tracked mechanical armature. The display acts as a physical window into 3D virtual environments, through which a one-to-one mapping between real and virtual space is preserved. The Boom Chameleon is further augmented with a touch-screen and a microphone/speaker combination. We created a 3D annotation application that exploits this unique configuration in order to simultaneously capture viewpoint, voice and gesture information. Results of an informal user study show that the Boom Chameleon annotation facilities have the potential to be an effective and intuitive system for reviewing 3D designs.
{"title":"Boom chameleon: simultaneous capture of 3D viewpoint, voice and gesture annotations on a spatially-aware display","authors":"M. Tsang, G. Fitzmaurice, G. Kurtenbach, Azam Khan, W. Buxton","doi":"10.1145/1201775.882329","DOIUrl":"https://doi.org/10.1145/1201775.882329","url":null,"abstract":"We review the Boom Chameleon, a novel input/output device consisting of a flat-panel display mounted on a tracked mechanical armature. The display acts as a physical window into 3D virtual environments, through which a one-to-one mapping between real and virtual space is preserved. The Boom Chameleon is further augmented with a touch-screen and a microphone/speaker combination. We created a 3D annotation application that exploits this unique configuration in order to simultaneously capture viewpoint, voice and gesture information. Results of an informal user study show that the Boom Chameleon annotation facilities have the potential to be an effective, and intuitive system for reviewing 3D designs.","PeriodicalId":314969,"journal":{"name":"ACM SIGGRAPH 2003 Papers","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132977110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Real-time rendering of aerodynamic sound using sound textures based on computational fluid dynamics
Y. Dobashi, Tsuyoshi Yamamoto, T. Nishita
DOI: 10.1145/1201775.882339
In computer graphics, most research focuses on creating images. However, there has been much recent work on the automatic generation of sound linked to objects in motion and the relative positions of receivers and sound sources. This paper proposes a new method for creating one type of sound called aerodynamic sound. Examples of aerodynamic sound include sound generated by swinging swords or by wind blowing. A major source of aerodynamic sound is vortices generated in fluids such as air. First, we propose a method for creating sound textures for aerodynamic sound by making use of computational fluid dynamics. Next, we propose a method using the sound textures for real-time rendering of aerodynamic sound according to the motion of objects or wind velocity.
{"title":"Real-time rendering of aerodynamic sound using sound textures based on computational fluid dynamics","authors":"Y. Dobashi, Tsuyoshi Yamamoto, T. Nishita","doi":"10.1145/1201775.882339","DOIUrl":"https://doi.org/10.1145/1201775.882339","url":null,"abstract":"In computer graphics, most research focuses on creating images. However, there has been much recent work on the automatic generation of sound linked to objects in motion and the relative positions of receivers and sound sources. This paper proposes a new method for creating one type of sound called aerodynamic sound. Examples of aerodynamic sound include sound generated by swinging swords or by wind blowing. A major source of aerodynamic sound is vortices generated in fluids such as air. First, we propose a method for creating sound textures for aerodynamic sound by making use of computational fluid dynamics. Next, we propose a method using the sound textures for real-time rendering of aerodynamic sound according to the motion of objects or wind velocity.","PeriodicalId":314969,"journal":{"name":"ACM SIGGRAPH 2003 Papers","volume":"150 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131208263","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Shadow matting and compositing
Yung-Yu Chuang, Dan B. Goldman, B. Curless, D. Salesin, R. Szeliski
DOI: 10.1145/1201775.882298
In this paper, we describe a method for extracting shadows from one natural scene and inserting them into another. We develop physically-based shadow matting and compositing equations and use these to pull a shadow matte from a source scene in which the shadow is cast onto an arbitrary planar background. We then acquire the photometric and geometric properties of the target scene by sweeping oriented linear shadows (cast by a straight object) across it. From these shadow scans, we can construct a shadow displacement map without requiring camera or light source calibration. This map can then be used to deform the original shadow matte. We demonstrate our approach for both indoor scenes with controlled lighting and for outdoor scenes using natural lighting.
{"title":"Shadow matting and compositing","authors":"Yung-Yu Chuang, Dan B. Goldman, B. Curless, D. Salesin, R. Szeliski","doi":"10.1145/1201775.882298","DOIUrl":"https://doi.org/10.1145/1201775.882298","url":null,"abstract":"In this paper, we describe a method for extracting shadows from one natural scene and inserting them into another. We develop physically-based shadow matting and compositing equations and use these to pull a shadow matte from a source scene in which the shadow is cast onto an arbitrary planar background. We then acquire the photometric and geometric properties of the target scene by sweeping oriented linear shadows (cast by a straight object) across it. From these shadow scans, we can construct a shadow displacement map without requiring camera or light source calibration. This map can then be used to deform the original shadow matte. We demonstrate our approach for both indoor scenes with controlled lighting and for outdoor scenes using natural lighting.","PeriodicalId":314969,"journal":{"name":"ACM SIGGRAPH 2003 Papers","volume":"121 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132985131","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Motion synthesis from annotations
Okan Arikan, D. Forsyth, J. F. O'Brien
DOI: 10.1145/1201775.882284
This paper describes a framework that allows a user to synthesize human motion while retaining control of its qualitative properties. The user paints a timeline with annotations --- like walk, run or jump --- from a vocabulary which is freely chosen by the user. The system then assembles frames from a motion database so that the final motion performs the specified actions at specified times. The motion can also be forced to pass through particular configurations at particular times, and to go to a particular position and orientation. Annotations can be painted positively (for example, must run), negatively (for example, may not run backwards) or as a don't-care. The system uses a novel search method, based around dynamic programming at several scales, to obtain a solution efficiently so that authoring is interactive. Our results demonstrate that the method can generate smooth, natural-looking motion. The annotation vocabulary can be chosen to fit the application, and allows specification of composite motions (run and jump simultaneously, for example). The process requires a collection of motion data that has been annotated with the chosen vocabulary. This paper also describes an effective tool, based around repeated use of support vector machines, that allows a user to annotate a large collection of motions quickly and easily so that they may be used with the synthesis algorithm.
{"title":"Motion synthesis from annotations","authors":"Okan Arikan, D. Forsyth, J. F. O'Brien","doi":"10.1145/1201775.882284","DOIUrl":"https://doi.org/10.1145/1201775.882284","url":null,"abstract":"This paper describes a framework that allows a user to synthesize human motion while retaining control of its qualitative properties. The user paints a timeline with annotations --- like walk, run or jump --- from a vocabulary which is freely chosen by the user. The system then assembles frames from a motion database so that the final motion performs the specified actions at specified times. The motion can also be forced to pass through particular configurations at particular times, and to go to a particular position and orientation. Annotations can be painted positively (for example, must run), negatively (for example, may not run backwards) or as a don't-care. The system uses a novel search method, based around dynamic programming at several scales, to obtain a solution efficiently so that authoring is interactive. Our results demonstrate that the method can generate smooth, natural-looking motion.The annotation vocabulary can be chosen to fit the application, and allows specification of composite motions (run and jump simultaneously, for example). The process requires a collection of motion data that has been annotated with the chosen vocabulary. This paper also describes an effective tool, based around repeated use of support vector machines, that allows a user to annotate a large collection of motions quickly and easily so that they may be used with the synthesis algorithm.","PeriodicalId":314969,"journal":{"name":"ACM SIGGRAPH 2003 Papers","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115760885","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Nonconvex rigid bodies with stacking
Eran Guendelman, R. Bridson, Ronald Fedkiw
DOI: 10.1145/1201775.882358
We consider the simulation of nonconvex rigid bodies focusing on interactions such as collision, contact, friction (kinetic, static, rolling and spinning) and stacking. We advocate representing the geometry with both a triangulated surface and a signed distance function defined on a grid, and this dual representation is shown to have many advantages. We propose a novel approach to time integration merging it with the collision and contact processing algorithms in a fashion that obviates the need for ad hoc threshold velocities. We show that this approach matches the theoretical solution for blocks sliding and stopping on inclined planes with friction. We also present a new shock propagation algorithm that allows for efficient use of the propagation (as opposed to the simultaneous) method for treating contact. These new techniques are demonstrated on a variety of problems ranging from simple test cases to stacking problems with as many as 1000 nonconvex rigid bodies with friction as shown in Figure 1.
{"title":"Nonconvex rigid bodies with stacking","authors":"Eran Guendelman, R. Bridson, Ronald Fedkiw","doi":"10.1145/1201775.882358","DOIUrl":"https://doi.org/10.1145/1201775.882358","url":null,"abstract":"We consider the simulation of nonconvex rigid bodies focusing on interactions such as collision, contact, friction (kinetic, static, rolling and spinning) and stacking. We advocate representing the geometry with both a triangulated surface and a signed distance function defined on a grid, and this dual representation is shown to have many advantages. We propose a novel approach to time integration merging it with the collision and contact processing algorithms in a fashion that obviates the need for ad hoc threshold velocities. We show that this approach matches the theoretical solution for blocks sliding and stopping on inclined planes with friction. We also present a new shock propagation algorithm that allows for efficient use of the propagation (as opposed to the simultaneous) method for treating contact. These new techniques are demonstrated on a variety of problems ranging from simple test cases to stacking problems with as many as 1000 nonconvex rigid bodies with friction as shown in Figure 1.","PeriodicalId":314969,"journal":{"name":"ACM SIGGRAPH 2003 Papers","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115239974","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}