C. Greuel, Patrice Caire, J. Cirincione, Perry Hoberman, Michael Scroggins
Christian Greuel We hear the talk of endless technological revolutions. We are surrounded by high-tech gadgetry that does our bidding. Yet what does all of this magnificent machinery really offer us? Does progress in fact exist? And if so, what is it actually worth without substantial content? This discussion panel addresses the current state of aesthetics in the virtual environment by focusing on the roles that tools have played in artistic communities of the past and how virtual technologies will undoubtedly affect their future. The beginning of history shows human beings using naturally-made pigments to draw images on cave walls, allowing them to represent their experiences to others. Through tomorrow's technology, we may find ourselves projecting our very thoughts into the space around us in order to do exactly the same. The purpose of the aesthetic action always has been and always will be to visualize ideas and to explore our environments using whatever devices are available. Today we have increasingly powerful instruments, such as personal computer workstations, stereoscopic video displays and interactive software, to present artificially fabricated environments, popularly known as Virtual Reality. The technological elements are in place and we have begun our investigation into the latest and greatest form of artistic communication. Virtual Reality promises artists the most exciting breakthrough for the creative process since the invention of motion pictures. Now at the dawn of an era of virtual arts, the first generations of tools wait patiently to tell us something that we don't already know. But what message do they bring? Is there any passion here? High-end technology is not an end in itself. It merely represents the latest in a long list of tools that can be used for human expression. We have not come this far just to do cool computer tricks or sell vacant office space. There has been an unfortunate lack of artistic activity in cyberspace.
We must focus on this cultural deficit and breathe life into the cold silicon void that we have created. By considering the tools of Virtual Reality in a historical context of art and technology as they relate to the fabrication of simulated experience, this panel of active artists intends to provoke constructive thought amongst the virtual arts community, promote active exploration of experience as an art form and unlock doors to possible roads for our artistic travels throughout this age of cybernetics.
{"title":"Aesthetics & tools in the virtual environment (panel session)","authors":"C. Greuel, Patrice Caire, J. Cirincione, Perry Hoberman, Michael Scroggins","doi":"10.1145/218380.218526","DOIUrl":"https://doi.org/10.1145/218380.218526","url":null,"abstract":"Christian Greuel We hear the talk of endless technological revolutions. We are surrounded by high-tech gadgetry that does our bidding. Yet what does all of this magnificent machinery really offer us? Does progress in fact exist? And if so, what is it actually worth without substantial content? This discussion panel is addressing the current state of aesthetics in the virtual environment by focusing on the roles that tools have played in artistic communities of the past and how virtual technologies will undoubtedly affect their future. The beginning of history shows human beings using naturally-made pigments to draw images on cave walls, allowing them to represent their experiences to others. Through tomorrow's technology, we may find ourselves projecting our very thoughts into the space around us in order to do exactly the same. The purpose of the aesthetic action has and always will be to visualize ideas and to explore our environments using whatever devices are available. Today we have increasingly powerful instruments, such as personal computer workstations, stereoscopic video displays and interactive software, to present artificially fabricated environments, popularly known as Virtual Reality. The technological elements are in place and we have begun our investigation into the latest and greatest form of artistic communication. Virtual Reality promises artists the most exciting breakthrough for the creative process since the invention of motion pictures. Now at the dawn of an era of virtual arts, the first generations of tools wait patiently to tell us something that we don't already know. But what message do they bring? Is there any passion here? High-end technology is not an end in itself. 
It merely represents the latest in a long list of tools that can be used for human expression. We have not come this far just to do cool computer tricks or sell vacant office space. There has been an unfortunate lack of artistic activity in cyberspace. We must focus on this cultural deficit and breathe life into the cold silicon void that we have created. By considering the tools of Virtual Reality in a historical context of art and technology as they relate to the fabrication of simulated experience, this panel of active artists intends to provoke constructive thought amongst the virtual arts community, promote active exploration of experience as an art form and unlock doors to possible roads for our artistic travels throughout this age of cybernetics.","PeriodicalId":447770,"journal":{"name":"Proceedings of the 22nd annual conference on Computer graphics and interactive techniques","volume":"92 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122750900","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present an algorithm for generating uniformly distributed random samples from arbitrary spherical triangles. The algorithm is based on a transformation of the unit square and easily accommodates stratified sampling, an effective means of reducing variance. With the new algorithm it is straightforward to perform stratified sampling of the solid angle subtended by an arbitrary polygon; this is a fundamental operation in image synthesis which has not been addressed in the Monte Carlo literature. We derive the required transformation using elementary spherical trigonometry and provide the complete sampling algorithm.
{"title":"Stratified sampling of spherical triangles","authors":"J. Arvo","doi":"10.1145/218380.218500","DOIUrl":"https://doi.org/10.1145/218380.218500","url":null,"abstract":"We present an algorithm for generating uniformly distributed random samples from arbitrary spherical triangles. The algorithm is based on a transformation of the unit square and easily accommodates stratified sampling, an effective means of reducing variance. With the new algorithm it is straightforward to perform stratified sampling of the solid angle subtended by an arbitrary polygon; this is a fundamental operation in image synthesis which has not been addressed in the Monte Carlo literature. We derive the required transformation using elementary spherical trigonometry and provide the complete sampling algorithm. CR","PeriodicalId":447770,"journal":{"name":"Proceedings of the 22nd annual conference on Computer graphics and interactive techniques","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124945452","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
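The square-to-triangle transformation the abstract describes can be sketched in code. The following Python is an illustrative reconstruction based on common presentations of Arvo's method, not code from the paper; the variable names and helper functions are assumptions.

```python
import math

def sample_spherical_triangle(A, B, C, xi1, xi2):
    """Map a point (xi1, xi2) of the unit square to a point distributed
    uniformly over the spherical triangle ABC (A, B, C unit vectors).
    Step 1 picks a sub-triangle of area xi1 * total by sliding vertex C
    along edge AC; step 2 picks a point along the arc from B to that
    new vertex. Degenerate triangles are not handled in this sketch."""
    def dot(u, v): return sum(a * b for a, b in zip(u, v))
    def norm(u):
        l = math.sqrt(dot(u, u))
        return tuple(x / l for x in u)
    def ortho(u, v):
        # unit component of u orthogonal to unit vector v
        d = dot(u, v)
        return norm(tuple(a - d * b for a, b in zip(u, v)))
    def angle(p, q, r):
        # internal angle of the spherical triangle at vertex p
        u, v = ortho(q, p), ortho(r, p)
        return math.acos(max(-1.0, min(1.0, dot(u, v))))

    alpha, beta, gamma = angle(A, B, C), angle(B, C, A), angle(C, A, B)
    area = alpha + beta + gamma - math.pi          # spherical excess

    # choose the cumulative sub-triangle area, solve for new vertex C_hat on AC
    area_hat = xi1 * area
    s, t = math.sin(area_hat - alpha), math.cos(area_hat - alpha)
    u = t - math.cos(alpha)
    v = s + math.sin(alpha) * dot(A, B)            # dot(A, B) = cos(edge AB)
    q = ((v * t - u * s) * math.cos(alpha) - v) / ((v * s + u * t) * math.sin(alpha))
    C_hat = tuple(q * a + math.sqrt(max(0.0, 1 - q * q)) * w
                  for a, w in zip(A, ortho(C, A)))

    # choose a point along the arc from B to C_hat
    z = 1 - xi2 * (1 - dot(C_hat, B))
    return tuple(z * b + math.sqrt(max(0.0, 1 - z * z)) * w
                 for b, w in zip(B, ortho(C_hat, B)))
```

Because the map is a transformation of the unit square, stratifying (xi1, xi2) stratifies the samples over the solid angle directly, which is the property the abstract emphasizes.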
This paper introduces the concept of Geometry Compression, allowing 3D triangle data to be represented with a factor of 6 to 10 times fewer bits than conventional techniques, with only slight losses in object quality. The technique is amenable to rapid decompression in both software and hardware implementations; if 3D rendering hardware contains a geometry decompression unit, application geometry can be stored in memory in compressed format. Geometry is first represented as a generalized triangle mesh, a data structure that allows each instance of a vertex in a linear stream to specify an average of two triangles. Then a variable length compression is applied to individual positions, colors, and normals. Delta compression followed by a modified Huffman compression is used for positions and colors; a novel table-based approach is used for normals. The table allows any useful normal to be represented by an 18-bit index; many normals can be represented with index deltas of 8 bits or less. Geometry compression is a general space-time trade-off, and offers advantages at every level of the memory/interconnect hierarchy: less storage space is needed on disk, less transmission time is needed on networks.
{"title":"Geometry compression","authors":"M. Deering","doi":"10.1145/218380.218391","DOIUrl":"https://doi.org/10.1145/218380.218391","url":null,"abstract":"This paper introduces the concept of Geometry Compression, allowing 3D triangle data to be represented with a factor of 6 to 10 times fewer bits than conventional techniques, with only slight losses in object quality. The technique is amenable to rapid decompression in both software and hardware implementations; if 3D rendering hardware contains a geometry decompression unit, application geometry can be stored in memory in compressed format. Geometry is first represented as a generalized triangle mesh, a data structure that allows each instance of a vertex in a linear stream to specify an average of two triangles. Then a variable length compression is applied to individual positions, colors, and normals. Delta compression followed by a modified Huffman compression is used for positions and colors; a novel table-based approach is used for normals. The table allows any useful normal to be represented by an 18-bit index; many normals can be represented with index deltas of 8 bits or less. Geometry compression is a general space-time trade-off, and offers advantages at every level of the memory/interconnect hierarchy: less storage space is needed on disk, less transmission time is needed on networks.","PeriodicalId":447770,"journal":{"name":"Proceedings of the 22nd annual conference on Computer graphics and interactive techniques","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131111802","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
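The delta-compression stage the abstract mentions is easy to illustrate: on smooth meshes, successive quantized coordinates differ by small amounts, so the delta stream concentrates near zero and entropy-codes well. The sketch below is a minimal stand-in for that stage only, not the paper's actual bitstream format or its modified Huffman coder.

```python
def delta_encode(values):
    """Store the first quantized coordinate, then successive differences.
    The resulting small deltas are what make the follow-on entropy
    coding effective."""
    out, prev = [], 0
    for v in values:
        out.append(v - prev)
        prev = v
    return out

def delta_decode(deltas):
    """Invert delta_encode by running-summing the differences."""
    out, acc = [], 0
    for d in deltas:
        acc += d
        out.append(acc)
    return out
```

A real implementation would follow the delta stage with variable-length codes sized to the delta histogram, as the abstract describes for positions and colors.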
Pascal Volino, M. Courchesne, N. Magnenat-Thalmann
We are presenting techniques for simulating the motion and the deformation of cloth, fabrics or, more generally, deformable surfaces. Our main goal is to be able to simulate any kind of surface without imposing restrictions on shape or geometrical environment. In particular, we are considering difficult situations with respect to deformations and collisions, like wrinkled fabric falling on the ground. Thus, we have enhanced existing algorithms in order to cope with any possible situation. A mechanical model has been implemented to deal with arbitrary irregular triangular meshes, handle high deformations despite rough discretisation, and cope with complex interacting collisions. Thus, it should deal efficiently with situations where nonlinearities and discontinuities are far from marginal. Collision detection has also been improved to efficiently detect self-collisions, and also to correctly consider collision orientations despite the lack of surface orientation information from preset geometrical contexts, using consistency checking and correction. We illustrate these features through simulation examples.
{"title":"Versatile and efficient techniques for simulating cloth and other deformable objects","authors":"Pascal Volino, M. Courchesne, N. Magnenat-Thalmann","doi":"10.1145/218380.218432","DOIUrl":"https://doi.org/10.1145/218380.218432","url":null,"abstract":"We are presenting techniques for simulating the motion and the deformation of cloth, fabrics or, more generally, deformable surfaces. Our main goal is to be able to simulate any kind of surface without imposing restrictions on shape or geometrical environment. In particular, we are considering difficult situations with respect to deformations and collisions, like wrinkled fabric falling on the ground. Thus, we have enhanced existing algorithms in order to cope with any possible situation. A mechanical model has been implemented to deal with any irregular triangular meshes, handle high deformations despite rough discretisation, and cope with complex interacting collisions. Thus, it should deal efficiently with situations where nonlinearities and discontinuities are really non marginal. Collision detection has also been improved to efficiently detect self-collisions, and also to correctly consider collision orientations despite the lack of surface orientation information from preset geometrical contexts, using consistency checking and correction. 
We illustrate these features through simulation examples.","PeriodicalId":447770,"journal":{"name":"Proceedings of the 22nd annual conference on Computer graphics and interactive techniques","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132302402","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
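As a generic illustration of the force-and-integrate loop such simulators run, here is a minimal explicit mass-spring step in Python. This is not the authors' mechanical model (which operates directly on irregular triangle meshes and is built to survive strong nonlinearities); every name and parameter below is an assumption for illustration.

```python
import math

def step_cloth(positions, velocities, springs, rest, k, mass, dt, gravity=-9.81):
    """One explicit-Euler step of a toy mass-spring sheet.
    positions/velocities: lists of [x, y, z] per particle (mutated in place);
    springs: list of (i, j) particle-index pairs; rest: rest lengths."""
    n = len(positions)
    forces = [[0.0, 0.0, mass * gravity] for _ in range(n)]
    for (i, j), L0 in zip(springs, rest):
        d = [positions[j][a] - positions[i][a] for a in range(3)]
        L = math.sqrt(sum(c * c for c in d)) or 1e-12
        f = [k * (L - L0) * c / L for c in d]   # Hooke force along the edge
        for a in range(3):
            forces[i][a] += f[a]
            forces[j][a] -= f[a]
    for i in range(n):
        for a in range(3):
            velocities[i][a] += dt * forces[i][a] / mass
            positions[i][a] += dt * velocities[i][a]
    return positions, velocities
```

Explicit Euler like this goes unstable for stiff fabric unless dt is tiny, which is one reason production cloth solvers use more robust formulations.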
Recent advances in computer graphics have produced images approaching the elusive goal of photorealism. Since many natural objects are so complex and detailed, they are often not rendered with convincing fidelity due to the difficulties in succinctly defining and efficiently rendering their geometry. With the increased demand of future simulation and virtual reality applications, the production of realistic natural-looking background objects will become increasingly more important. We present a model to create and render trees. Our emphasis is on the overall geometrical structure of the tree and not a strict adherence to botanical principles. Since the model must be utilized by general users, it does not require any knowledge beyond the principles of basic geometry. We also explain a method to seamlessly degrade the tree geometry at long ranges to optimize the drawing of large quantities of trees in forested areas.
{"title":"Creation and rendering of realistic trees","authors":"Jason P. Weber, J. Penn","doi":"10.1145/218380.218427","DOIUrl":"https://doi.org/10.1145/218380.218427","url":null,"abstract":"Recent advances in computer graphics have produced images approaching the elusive goal of photorealism. Since many natural objects are so complex and detailed, they are often not rendered with convincing fidelity due to the difficulties in succinctly defining and efficiently rendering their geometry. With the increased demand of future simulation and virtual reality applications, the production of realistic natural-looking background objects will become increasingly more important. We present a model to create and render trees. Our emphasis is on the overall geometrical structure of the tree and not a strict adherence to botanical principles. Since the model must be utilized by general users, it does not require any knowledge beyond the principles of basic geometry. We also explain a method to seamlessly degrade the tree geometry at long ranges to optimize the drawing of large quantities of trees in forested areas.","PeriodicalId":447770,"journal":{"name":"Proceedings of the 22nd annual conference on Computer graphics and interactive techniques","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133733988","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
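The kind of geometry-driven branching model the abstract describes can be sketched with a short recursion. The parameters below (spread angle, shrink factor, branch count) are illustrative stand-ins, not the paper's parameter set.

```python
import math, random

def grow_tree(base, direction, length, depth, spread=0.6, shrink=0.7,
              branches=2, rng=None):
    """Recursively generate line segments for a toy tree skeleton.
    Each branch spawns `branches` children whose directions are random
    perturbations of the parent's and whose lengths shrink by `shrink`.
    Returns a list of (start, end) point tuples."""
    rng = rng or random.Random(0)
    end = tuple(b + length * d for b, d in zip(base, direction))
    segments = [(base, end)]
    if depth == 0:
        return segments
    for _ in range(branches):
        # perturb the parent direction, then renormalize
        new_dir = [d + rng.uniform(-spread, spread) for d in direction]
        n = math.sqrt(sum(c * c for c in new_dir)) or 1.0
        new_dir = tuple(c / n for c in new_dir)
        segments += grow_tree(end, new_dir, length * shrink, depth - 1,
                              spread, shrink, branches, rng)
    return segments
```

The level-of-detail degradation mentioned in the abstract would correspond to capping `depth` (and swapping fine twigs for textures) as viewing distance grows.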
Developing a visually convincing model of fire, smoke, and other gaseous phenomena is among the most difficult and attractive problems in computer graphics. We have created new methods of animating a wide range of gaseous phenomena, including the particularly subtle problem of modelling “wispy” smoke and steam, using far fewer primitives than before. One significant innovation is the reformulation and solution of the advection-diffusion equation for densities composed of “warped blobs”. These blobs more accurately model the distortions that gases undergo when advected by wind fields. We also introduce a simple model for the flame of a fire and its spread. Lastly, we present an efficient formulation and implementation of global illumination in the presence of gases and fire. Our models are specifically designed to permit a significant degree of user control over the evolution of gaseous phenomena.
{"title":"Depicting fire and other gaseous phenomena using diffusion processes","authors":"J. Stam, E. Fiume","doi":"10.1145/218380.218430","DOIUrl":"https://doi.org/10.1145/218380.218430","url":null,"abstract":"Developing a visually convincing model of fire, smoke, and other gaseous phenomenais among the most difficult and attractive problems in computer graphics. We have created new methods of animating a wide range of gaseous phenomena, including the particularly subtle problem of modelling “wispy” smoke and steam, using far fewer primitives than before. One significant innovation is the reformulation and solution of the advection-diffusion equation for densities composed of “warped blobs”. These blobs more accurately model the distortions that gases undergo when advected by wind fields. We also introduce a simple model for the flame of a fire and its spread. Lastly, we present an efficient formulation and implementation of global illumination in the presence of gases and fire. Our models are specifically designed to permit a significant degree of user control over the evolution of gaseous phenomena.","PeriodicalId":447770,"journal":{"name":"Proceedings of the 22nd annual conference on Computer graphics and interactive techniques","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133250983","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
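The blob-based solution of the advection-diffusion equation can be illustrated with a heavily simplified sketch: each Gaussian blob's center follows the wind field while diffusion grows its variance linearly in time. The paper's contribution is that its blobs additionally warp under the wind field; the isotropic blobs below are a minimal stand-in, and all names are assumptions.

```python
import math

def advect_blobs(blobs, wind, dt, diffusion):
    """Advance Gaussian density blobs one step. Each blob is
    (x, y, radius, mass); `wind` maps (x, y) -> (u, v)."""
    out = []
    for (x, y, r, m) in blobs:
        u, v = wind(x, y)
        out.append((x + dt * u,
                    y + dt * v,
                    math.sqrt(r * r + 2.0 * diffusion * dt),  # variance grows linearly
                    m))
    return out

def density(blobs, x, y):
    """Evaluate the summed Gaussian density field at a point."""
    total = 0.0
    for (bx, by, r, m) in blobs:
        d2 = (x - bx) ** 2 + (y - by) ** 2
        total += m / (2 * math.pi * r * r) * math.exp(-d2 / (2 * r * r))
    return total
```

Because the field stays a small sum of analytic blobs, it can be animated and rendered with far fewer primitives than a grid of density samples, which is the efficiency argument the abstract makes.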
Physically-based modeling has been used in the past to support a variety of interactive modeling tasks including free-form surface design, mechanism design, constrained drawing, and interactive camera control. In these systems, the user interacts with the model by exerting virtual forces, to which the system responds subject to the active constraints. In the past, this kind of interaction has been applicable only to models that are governed by continuous parameters. In this paper we present an extension to mixed continuous/discrete models, emphasizing constrained layout problems that arise in architecture and other domains. When the object being dragged is blocked from further motion by geometric constraints, a local discrete search is triggered, during which transformations such as swapping of adjacent objects may be performed. The result of the search is a “nearby” state in which the target object has been moved in the indicated direction and in which all constraints are satisfied. The transition to this state is portrayed using simple but effective animated visual effects. Following the transition, continuous dragging is resumed. The resulting seamless transitions between discrete and continuous manipulation allow the user to easily explore the mixed design space just by dragging objects. We demonstrate the method in application to architectural floor plan design, circuit board layout, art analysis, and page layout.
{"title":"Interactive physically-based manipulation of discrete/continuous models","authors":"Mika Harada, A. Witkin, D. Baraff","doi":"10.1145/218380.218443","DOIUrl":"https://doi.org/10.1145/218380.218443","url":null,"abstract":"Physically-based modeling has been used in the past to support a variety of interactive modeling tasks including free-form surface design, mechanism design, constrained drawing, and interactive camera control. In these systems, the user interacts with the model by exerting virtual forces, to which the system responds subject to the active constraints. In the past, this kind of interaction has been applicable only to models that are governed by continuous parameters. In this paper we present an extension to mixed continuous/discrete models, emphasizing constrained layout problems that arise in architecture and other domains. When the object being dragged is blocked from further motion by geometric constraints, a local discrete search is triggered, during which transformations such as swapping of adjacent objects may be performed. The result of the search is a “nearby” state in which the target object has been moved in the indicated direction and in which all constraints are satisfied. The transition to this state is portrayed using simple but effective animated visual effects. Following the transition, continuous dragging is resumed. The resulting seamless transitions between discrete and continuous manipulation allow the user to easily explore the mixed design space just by dragging objects. 
We demonstrate the method in application to architectural floor plan design, circuit board layout, art analysis, and page layout.","PeriodicalId":447770,"journal":{"name":"Proceedings of the 22nd annual conference on Computer graphics and interactive techniques","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121247084","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
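The drag-until-blocked-then-search interaction loop can be shown in one dimension. This toy reduces the paper's local discrete search to a bare swap of two coordinates and ignores 2-D layout and richer constraints; the function and its parameters are illustrative assumptions.

```python
def drag_right(xs, i, target, gap):
    """Slide object i rightward toward `target` along a line of point
    objects that must stay at least `gap` apart. When the nearest right
    neighbor blocks further motion, perform a discrete swap of the two
    coordinates and resume continuous sliding. Returns the new layout."""
    xs = list(xs)
    while xs[i] < target:
        right = [j for j in range(len(xs)) if xs[j] > xs[i]]
        if not right:
            xs[i] = target                      # nothing blocks: finish the drag
            break
        j = min(right, key=lambda k: xs[k])     # nearest blocker to the right
        limit = xs[j] - gap                     # farthest collision-free position
        if target <= limit:
            xs[i] = target                      # continuous motion completes
        else:
            xs[i], xs[j] = xs[j], xs[i]         # blocked: discrete swap, resume
    return xs
```

Each swap strictly increases xs[i], so the loop terminates; the result is a "nearby" constraint-satisfying state, matching the behavior the abstract describes.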
A major unsolved problem in computer graphics is the construction and animation of realistic human facial models. Traditionally, facial models have been built painstakingly by manual digitization and animated by ad hoc parametrically controlled facial mesh deformations or kinematic approximation of muscle actions. Fortunately, animators are now able to digitize facial geometries through the use of scanning range sensors and animate them through the dynamic simulation of facial tissues and muscles. However, these techniques require considerable user input to construct facial models of individuals suitable for animation. In this paper, we present a methodology for automating this challenging task. Starting with a structured facial mesh, we develop algorithms that automatically construct functional models of the heads of human subjects from laser-scanned range and reflectance data. These algorithms automatically insert contractile muscles at anatomically correct positions within a dynamic skin model and root them in an estimated skull structure with a hinged jaw. They also synthesize functional eyes, eyelids, teeth, and a neck and fit them to the final model. The constructed face may be animated via muscle actuations. In this way, we create the most authentic and functional facial models of individuals available to date and demonstrate their use in facial animation.
{"title":"Realistic modeling for facial animation","authors":"Yuencheng Lee, Demetri Terzopoulos, K. Waters","doi":"10.1145/218380.218407","DOIUrl":"https://doi.org/10.1145/218380.218407","url":null,"abstract":"A major unsolved problem in computer graphics is the construction and animation of realistic human facial models. Traditionally, facial models have been built painstakingly by manual digitization and animated by ad hoc parametrically controlled facial mesh deformations or kinematic approximation of muscle actions. Fortunately, animators are now able to digitize facial geometries through the use of scanning range sensors and animate them through the dynamic simulation of facial tissues and muscles. However, these techniques require considerable user input to construct facial models of individuals suitable for animation. In this paper, we present a methodology for automating this challenging task. Starting with a structured facial mesh, we develop algorithms that automatically construct functional models of the heads of human subjects from laser-scanned range and reflectance data. These algorithms automatically insert contractile muscles at anatomically correct positions within a dynamic skin model and root them in an estimated skull structure with a hinged jaw. They also synthesize functional eyes, eyelids, teeth, and a neck and fit them to the final model. The constructed face may be animated via muscle actuations. 
In this way, we create the most authentic and functional facial models of individuals available to date and demonstrate their use in facial animation.","PeriodicalId":447770,"journal":{"name":"Proceedings of the 22nd annual conference on Computer graphics and interactive techniques","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124194478","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
J. Veeder, Mark Stephen Pierce, E. Jarvis, J. N. Latta, Heidi Therese Dangelmaier, Jez San
Description While innovative (but secretive) early on, the videogame industry is now rejoining the computer graphics mainstream. In production, we see a rapid move from 2D to 3D animation, lowend to high-end production technologies, limited in-house tools to cutting-edge animation production techniques such as motion capture and 3D character animation. In game formats, we see experimentation with multi-player games, cooperative strategies, and virtual reality. Interactive entertainment overall is a rapidly expanding area with a great requirement for creative intervention and sophisticated computer graphics. The videogame industry has only very recently come into focus for many people in the computer graphics field, yet this industry is now driving much of the technology development in computer animation. Videogame development is being drawn deeper into the media mainstream. We have entered the age of the "commercial transmedia supersystem" where entertainment content is proliferated across multiple marketing opportunities: the game, the movie, the music CD, the book, the doll. Application developers have recently focused on an "author once, deploy many" imperative for cost effective production. As a new generation tackles the problem of interactive content production, their tools apply contemporary technical solutions to a process done with graph paper and assembler code not so many years ago. Videogame content may evolve as well, driven by the new delivery systems which underlie market growth. For example, the corporate dreams of interactive television list the two largest consumer revenue areas as shopping and (then) games. Ubiquitous interactive television would certainly leverage today's limited multi-user games. New audiences mean designing for new cognitive models of fun and taking advantage of recent research in how media products relate to gender and childhood development. 
Electronic gaming could evolve to encompass nationwide social events such as elections, celebrity trials, virtual participation in natural disasters, and so forth. All these new products, applications, and markets require technical, design, and artistic contributions for development, yet our skills, knowledge sets, and innovation must be translated into the new forms. To make this translation we must develop a coherent picture of how this industry is currently constituted and how it may evolve in the future. This panel will focus on a number of topics including platform hardware, delivery systems and their markets, the move into 3D computer graphics, virtuality in videogame design, overlapping areas of interactive entertainment, e.g. multimedia and theme parks, markets and content, and projected future developments.
{"title":"Videogame industry overview (panel session): technology, markets, content, future","authors":"J. Veeder, Mark Stephen Pierce, E. Jarvis, J. N. Latta, Heidi Therese Dangelmaier, Jez San","doi":"10.1145/218380.218521","DOIUrl":"https://doi.org/10.1145/218380.218521","url":null,"abstract":"Description While innovative (but secretive) early on, the videogame industry is now rejoining the computer graphics mainstream. In production, we see a rapid move from 2D to 3D animation, lowend to high-end production technologies, limited in-house tools to cutting-edge animation production techniques such as motion capture and 3D character animation. In game formats, we see experimentation with multi-player games, cooperative strategies, and virtual reality. Interactive entertainment overall is a rapidly expanding area with a great requirement for creative intervention and sophisticated computer graphics. The videogame industry has only very recently come into focus for many people in the computer graphics field, yet this industry is now driving much of the technology development in computer animation. Videogame development is being drawn deeper into the media mainstream. We have entered the age of the \"commercial transmedia supersystem\" where entertainment content is proliferated across multiple marketing opportunities: the game, the movie, the music CD, the book, the doll. Application developers have recently focused on an \"author once, deploy many\" imperative for cost effective production. As a new generation tackles the problem of interactive content production, their tools apply contemporary technical solutions to a process done with graph paper and assembler code not so many years ago. Videogame content may evolve as well, driven by the new delivery systems which underly market growth. For example, the corporate dreams of interactive television list the two largest consumer revenue areas as shopping and (then) games. 
Ubiquitous interactive television would certainly leverage today's limited multi-user games. New audiences means designing for new cognitive models of fun and taking advantage of recent research in how media products relate to gender and childhood development. Electronic gaming could evolve to encompass nationwide social events such as elections, celebrity trials, virtual participation in natural disasters, and so forth. All these new products, applications, and markets require technical, design, and artistic contributions for development, yet our skills, knowledge sets, and innovation must be translated into the new forms. To make this translation we must develop a coherent picture of how this industry is currently constituted and how it may evolve in the future. This panel will focus on a number of topics including platform hardware, delivery systems and their markets, the move into 3D computer graphics, virtuality in videogame design, overlapping areas of interactive entertainment, e.g. multimedia and theme parks, markets and content, and projected future developments.","PeriodicalId":447770,"journal":{"name":"Proceedings of the 22nd annual conference on Computer graphics and interactive techniques","volume":"11 9","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120851740","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A frequency-domain analysis of head-motion prediction","authors":"Ronald T. Azuma, G. Bishop","doi":"10.1145/218380.218496","DOIUrl":"https://doi.org/10.1145/218380.218496","url":null,"abstract":"The use of prediction to eliminate or reduce the effects of system delays in Head-Mounted Display systems has been the subject of several recent papers. A variety of methods have been proposed but almost all the analysis has been empirical, making comparisons of results difficult and providing little direction to the designer of new systems. In this paper, we characterize the performance of two classes of head-motion predictors by analyzing them in the frequency domain. The first predictor is a polynomial extrapolation and the other is based on the Kalman filter. Our analysis shows that even with perfect, noise-free inputs, the error in predicted position grows rapidly with increasing prediction intervals and input signal frequencies. Given the spectra of the original head motion, this analysis estimates the spectra of the predicted motion, quantifying a predictor's performance on different systems and applications. Acceleration sensors are shown to be more useful to a predictor than velocity sensors. The methods described will enable designers to determine maximum acceptable system delay based on maximum tolerable error and the characteristics of user motions in the application.","PeriodicalId":447770,"journal":{"name":"Proceedings of the 22nd annual conference on Computer graphics and interactive techniques","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131280757","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
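The polynomial-extrapolation predictor that the Azuma and Bishop abstract analyzes can be sketched in a few lines. The following is an illustrative second-order extrapolator evaluated on the abstract's noise-free case, a pure sinusoidal head motion with exact velocity and acceleration supplied to the predictor; the function names, the test time, and the sampled frequencies and intervals are assumptions for illustration, not taken from the paper. It reproduces the paper's qualitative finding: predicted-position error grows rapidly with both the prediction interval and the input signal frequency.

```python
import math

def predict_position(p, v, a, dt):
    """Second-order polynomial extrapolation of position:
    p(t + dt) ~= p + v*dt + 0.5*a*dt^2."""
    return p + v * dt + 0.5 * a * dt * dt

def prediction_error(omega, dt, t=0.3):
    """Absolute prediction error on a pure sinusoid p(t) = sin(omega*t),
    feeding the extrapolator exact (noise-free) velocity and acceleration."""
    p = math.sin(omega * t)
    v = omega * math.cos(omega * t)           # exact first derivative
    a = -omega * omega * math.sin(omega * t)  # exact second derivative
    return abs(predict_position(p, v, a, dt) - math.sin(omega * (t + dt)))

if __name__ == "__main__":
    # Error grows with both the prediction interval and the motion frequency.
    for hz in (0.5, 2.0):
        for dt in (0.01, 0.05, 0.1):
            err = prediction_error(2 * math.pi * hz, dt)
            print(f"{hz:.1f} Hz motion, {dt * 1000:.0f} ms interval: error {err:.5f}")
```

For a sinusoid the extrapolation error behaves like the Taylor remainder, roughly (omega*dt)^3/6, which is why even this idealized noise-free predictor degrades quickly as either the delay being compensated or the head-motion frequency rises.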