Improving static and dynamic registration in an optical see-through HMD
Ronald T. Azuma, G. Bishop. Proceedings of the 21st annual conference on Computer graphics and interactive techniques, July 24, 1994. DOI: 10.1145/192161.192199

In Augmented Reality, see-through HMDs superimpose virtual 3D objects on the real world. This technology has the potential to enhance a user's perception of and interaction with the real world. However, many Augmented Reality applications will not be accepted until we can accurately register virtual objects with their real counterparts. In previous systems, such registration was achieved only from a limited range of viewpoints, and only when the user kept his head still. This paper offers improved registration in two areas. First, our system demonstrates accurate static registration across a wide variety of viewing angles and positions. An optoelectronic tracker provides the required range and accuracy, and three calibration steps determine the viewing parameters. Second, dynamic errors that occur when the user moves his head are reduced by predicting future head locations. Inertial sensors mounted on the HMD aid head-motion prediction. Accurate determination of prediction distances requires low-overhead operating systems and the elimination of unpredictable sources of latency. On average, prediction with inertial sensors produces errors 2 to 3 times lower than prediction without inertial sensors, and 5 to 10 times lower than using no prediction at all. Future steps that may further improve registration are outlined.
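The benefit of extrapolating head pose with inertial measurements can be illustrated with a minimal sketch. This is not the authors' actual estimator; the sinusoidal head motion, the 50 ms latency, and the function names are all assumptions made for illustration. The idea is only that a rate gyro gives the angular velocity directly, so a first-order extrapolation beats rendering with the stale tracker reading.

```python
import math

def predict_no_prediction(theta_now, omega, dt):
    # Render with the latest tracker reading; the system latency dt
    # shows up directly as registration error.
    return theta_now

def predict_with_inertial(theta_now, omega, dt):
    # First-order extrapolation using angular velocity measured by a
    # head-mounted rate gyro (hypothetical interface).
    return theta_now + omega * dt

def registration_error(true_theta, predicted_theta):
    return abs(true_theta - predicted_theta)

# Simulate a sinusoidal head yaw and compare predictors at 50 ms latency.
dt = 0.05
t = 0.3
theta = lambda t: math.sin(2.0 * t)        # head yaw (radians)
omega = lambda t: 2.0 * math.cos(2.0 * t)  # its true angular velocity

err_none = registration_error(theta(t + dt),
                              predict_no_prediction(theta(t), omega(t), dt))
err_inertial = registration_error(theta(t + dt),
                                  predict_with_inertial(theta(t), omega(t), dt))
```

For this smooth motion the inertial extrapolation cuts the error by more than an order of magnitude; real head motion is less predictable, which is why the paper reports factors of 2 to 3 rather than this idealized gain.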
A model for efficient and flexible image computing
Michael A. Shantzis. Proceedings of the 21st annual conference on Computer graphics and interactive techniques, July 24, 1994. DOI: 10.1145/192161.192191

As common as imaging operations are, the literature contains little about how to build systems for image computation. This paper presents a system that addresses the major issues of image computing. The system includes an algorithm for performing imaging operations which guarantees that only those regions of the image that affect the result are computed. The paper also discusses several other issues critical to creating a flexible image computing environment and describes solutions for them in the context of our model. These issues include how to handle images of any resolution and how to work in arbitrary coordinate systems. The paper also discusses the standard memory models, presents a new model, and weighs the advantages and disadvantages of each.
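The guarantee that only result-affecting regions are computed is the essence of demand-driven (pull-based) image evaluation. The sketch below is a generic illustration of that idea, not the paper's system: an image node evaluates pixels only when a downstream consumer requests a region, so a small output request touches only a small part of the source.

```python
# Demand-driven image computation sketch (assumed model): an Image node
# evaluates a pixel only when a region containing it is requested.
class Image:
    def __init__(self, width, height, fn):
        self.width, self.height, self.fn = width, height, fn
        self.computed = set()  # pixels actually evaluated, for inspection

    def request(self, region):
        # region = (x0, y0, x1, y1), half-open; clipped to the image bounds
        x0, y0, x1, y1 = region
        x0, y0 = max(x0, 0), max(y0, 0)
        x1, y1 = min(x1, self.width), min(y1, self.height)
        out = {}
        for y in range(y0, y1):
            for x in range(x0, x1):
                self.computed.add((x, y))
                out[(x, y)] = self.fn(x, y)
        return out

def invert(src):
    # Pointwise op: each output pixel depends on exactly one source pixel,
    # so requesting a small output region computes only that source region.
    return Image(src.width, src.height,
                 lambda x, y: 255 - src.request((x, y, x + 1, y + 1))[(x, y)])

src = Image(100, 100, lambda x, y: (x + y) % 256)
out = invert(src)
patch = out.request((10, 10, 12, 12))  # 2x2 output region
```

After the request, only the four source pixels under the 2x2 patch have been evaluated; the other 9,996 never are. Spatially extended operations (blurs, warps) would enlarge the requested source region accordingly.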
Art and technology: very large scale integration
Tom Meyer, Sally Rosenthal, Stephen R. Johnson, M. L. Jepsen, Douglas Davis. Proceedings of the 21st annual conference on Computer graphics and interactive techniques, July 24, 1994. DOI: 10.1145/192161.192296

The panel is made up of artists who create large-scale works using technology. We discuss the future of artistic techniques that incorporate technology in order to extend the possibilities of human interaction with the machine and with other people. Technology and artistic creation have always been closely linked, from the invention of painting, through the development of printing, up to the present, which offers new possibilities for people to interact with the technology, with the work, and with each other. However, much current artistic work is still based on traditional notions of electronic publication: one viewer/reader working with a work of art contained on one computer. Most VR systems are still walk-throughs, with little or no ability to interact with the created environment or with other people. The much-hyped CD-ROMs that are becoming the medium of choice for rock stars still do not provide even the level of intimacy and interaction that a relatively low-tech concert can provide. As social beings, we need shared experience, such as that generated by the spectacular. But the networked festival, the “digital convergence,” is happening. The World Wide Web almost tripled in size between November and December 1993, and more information is being linked into it constantly. On-line communities, such as MUDs and their relations, have become an explosion of creative interaction and are being used for real-time collaboration, including hypertext creative writing and other art projects: the development of “folk programming.” As communication bandwidth becomes cheaper, video teleconferencing and collaboration across cultural boundaries become common occurrences. And the plummeting costs of hardware and networking allow the development of ubiquitous computing, augmenting reality and communication by making the surrounding environment reactive to its participants.

The members of this panel are each exploring ways to extend human interaction both with technology and with other people, by using technology as an integral part of their art. Where existing tools are not useful or appropriate, they have extended them or built their own. Their art is art on the large scale, using technology to create artistic endeavors beyond the scale of the individual, at the scale of human communities.
Fast volume rendering using a shear-warp factorization of the viewing transformation
P. Lacroute, M. Levoy. Proceedings of the 21st annual conference on Computer graphics and interactive techniques, July 24, 1994. DOI: 10.1145/192161.192283

Several existing volume rendering algorithms operate by factoring the viewing transformation into a 3D shear parallel to the data slices, a projection to form an intermediate but distorted image, and a 2D warp to form an undistorted final image. We extend this class of algorithms in three ways. First, we describe a new object-order rendering algorithm based on the factorization that is significantly faster than published algorithms with minimal loss of image quality. Shear-warp factorizations have the property that rows of voxels in the volume are aligned with rows of pixels in the intermediate image. We use this fact to construct a scanline-based algorithm that traverses the volume and the intermediate image in synchrony, taking advantage of the spatial coherence present in both. We use spatial data structures based on run-length encoding for both the volume and the intermediate image. Our implementation running on an SGI Indigo workstation renders a 256³ voxel medical data set in one second. Our second extension is a shear-warp factorization for perspective viewing transformations, and we show how our rendering algorithm can support this extension. Third, we introduce a data structure for encoding spatial coherence in unclassified volumes (i.e. scalar fields with no precomputed opacity). When combined with our shear-warp rendering algorithm this data structure allows us to classify and render a 256³ voxel volume in three seconds. The method extends to support mixed volumes and geometry and is parallelizable.
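The core of the factorization can be sketched in a few lines. This is a deliberately stripped-down illustration, not the paper's renderer: it shears along one axis only, uses integer slice shifts instead of bilinear resampling, and sums voxels (X-ray style) instead of compositing with opacity. The point is just that after the shear, every viewing ray is perpendicular to the slices, so projection becomes simple slice accumulation into the intermediate image.

```python
# Sketch of the shear + project steps of shear-warp (assumed simplifications:
# shear along x only, shear_x >= 0, integer shifts, additive projection).
def shear_project(volume, shear_x):
    # volume[z][y][x]; slice z is translated horizontally by round(shear_x * z)
    # so that all viewing rays become perpendicular to the slices.
    depth, height, width = len(volume), len(volume[0]), len(volume[0][0])
    pad = abs(round(shear_x * (depth - 1)))
    inter = [[0.0] * (width + pad) for _ in range(height)]
    for z, slc in enumerate(volume):
        off = round(shear_x * z)
        for y in range(height):
            for x in range(width):
                # Voxel rows and intermediate-image rows stay aligned here,
                # which is what enables the paper's synchronized scanline
                # traversal and run-length skipping.
                inter[y][x + off] += slc[y][x]
    return inter  # distorted intermediate image; a 2D warp undoes the shear

# Tiny volume: two 1x2 slices; a unit shear shifts the second slice right.
inter = shear_project([[[1, 0]], [[0, 2]]], shear_x=1)
```

The final 2D warp, omitted here, resamples `inter` to the screen and is cheap because it is an ordinary image-space operation.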
Hierarchical spacetime control
Zicheng Liu, S. Gortler, Michael F. Cohen. Proceedings of the 21st annual conference on Computer graphics and interactive techniques, July 24, 1994. DOI: 10.1145/192161.192169

Specifying the motion of an animated linked figure such that it achieves given tasks (e.g., throwing a ball into a basket) and performs them in a realistic fashion (e.g., gracefully, and obeying physical laws such as gravity) has been an elusive goal for computer animators. The spacetime constraints paradigm has been shown to be a valuable approach to this problem, but it suffers from growth in computational complexity as creatures and tasks approach those one would like to animate. The complexity is shown to be due, in part, to the choice of finite basis with which to represent the trajectories of the generalized degrees of freedom. This paper adds new features to the spacetime constraints paradigm to address this problem. The functions through time of the generalized degrees of freedom are reformulated in a hierarchical wavelet representation. This provides a means to automatically add detailed motion only where it is required, thus minimizing the number of discrete variables. In addition, the wavelet basis is shown to lead to better-conditioned systems of equations and thus faster convergence.
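The "add detail only where required" idea can be illustrated with a simple adaptive subdivision, which is the same principle in a cruder basis. This is an illustration only, not the paper's B-spline wavelet formulation: intervals of a trajectory are refined only where a coarse piecewise-constant approximation misses the target by more than a tolerance, so representation cost concentrates where the motion has detail.

```python
# Adaptive hierarchical refinement sketch (Haar-style piecewise-constant
# levels standing in for the paper's wavelet hierarchy).
def refine(target, lo, hi, tol, depth=0, max_depth=8):
    # Approximate target(t) on [lo, hi] by its midpoint value; subdivide
    # only where this coarse level is too inaccurate.
    mid = 0.5 * (lo + hi)
    coarse = target(mid)
    err = max(abs(target(lo) - coarse), abs(target(hi) - coarse))
    if err <= tol or depth >= max_depth:
        return [(lo, hi, coarse)]  # one coefficient suffices on this interval
    return (refine(target, lo, mid, tol, depth + 1, max_depth)
            + refine(target, mid, hi, tol, depth + 1, max_depth))

# A trajectory that is flat near t=0 and steep near t=1: refinement is
# added automatically only where the motion changes quickly.
segments = refine(lambda t: t * t, 0.0, 1.0, tol=0.05)
```

The same economy is what keeps the number of discrete variables, and hence the size of the optimization, small in the spacetime formulation.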
Method of displaying optical effects within water using accumulation buffer
T. Nishita, E. Nakamae. Proceedings of the 21st annual conference on Computer graphics and interactive techniques, July 24, 1994. DOI: 10.1145/192161.192261

A precise shading model is required to display realistic images, and research on global illumination has recently become widespread. In global illumination, problems of diffuse reflection have been solved fairly well, but some optical problems involving specular reflection and refraction remain. Several natural phenomena stand out in light reflected from or refracted through the wave surface of water: refracted light from the water surface converges and diverges, creating shafts of light due to scattering from particles, and the color of the water is influenced by the scattering and absorption effects of water molecules and suspensions. For these effects, the intensity and direction of the light incident on the particles play an important role, and they are difficult to calculate with conventional ray tracing because light refracts when passing through waves; a pre-processing pass that traces rays from the light sources is therefore normally necessary. The method proposed here can efficiently calculate these optical effects, namely shafts of light, caustics, and the color of the water, without such pre-processing, by using a scanline Z-buffer and an accumulation buffer.
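The accumulation idea behind caustics can be shown in miniature. The sketch below is a strongly simplified illustration, not the paper's method: a 2D water surface, vertical incident light, Snell refraction at a sinusoidal wave, and a 1D buffer on the bottom. Where the refracted rays converge, the buffer bins collect many hits; those bins are the bright caustic bands.

```python
import math

# Caustic accumulation sketch (assumed setup: 2D scene, vertical sunlight,
# sinusoidal wave surface, flat bottom at the given depth).
def caustic_buffer(n_rays=400, n_bins=40, depth=1.0, eta=1.0 / 1.33):
    buf = [0] * n_bins
    for i in range(n_rays):
        x = i / n_rays  # ray enters straight down at surface position x
        slope = 0.3 * math.cos(2.0 * math.pi * x)    # wave surface slope
        theta_i = math.atan(slope)                   # incidence vs. wave normal
        theta_t = math.asin(eta * math.sin(theta_i)) # Snell's law, air -> water
        # Horizontal deflection of the refracted ray over the water depth:
        x_hit = x + depth * math.tan(theta_i - theta_t)
        bin_idx = min(n_bins - 1, max(0, int(x_hit * n_bins)))
        buf[bin_idx] += 1  # accumulate intensity where the ray lands
    return buf

buf = caustic_buffer()
```

The resulting bin counts are markedly non-uniform even for this gentle wave, which is exactly the convergence/divergence the accumulation buffer captures without any light-source pre-processing pass.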
Efficient techniques for interactive texture placement
Peter Litwinowicz, G. Miller. Proceedings of the 21st annual conference on Computer graphics and interactive techniques, July 24, 1994. DOI: 10.1145/192161.192187

This paper describes efficient algorithms for the placement and distortion of textures. The textures include surface color maps and environment maps. Affine transformations of a texture, as well as localized warps, are used to align features in the texture with features of the model. Image-space caches are used to enable texture placement in real time.
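Affine texture placement is conventionally implemented by inverse mapping: each destination pixel is mapped back into texture space and sampled. The sketch below shows that generic technique with nearest-neighbour sampling; the paper's image-space caches and localized warps are omitted, and the matrix layout is an assumption of this example.

```python
# Inverse-mapped affine texture placement sketch.
def apply_affine(texture, out_w, out_h, inv):
    # inv = (a, b, tx, c, d, ty): the INVERSE transform, giving texture
    # coordinates (u, v) for each output pixel (x, y).
    a, b, tx, c, d, ty = inv
    th, tw = len(texture), len(texture[0])
    out = [[0] * out_w for _ in range(out_h)]
    for y in range(out_h):
        for x in range(out_w):
            u = int(a * x + b * y + tx)  # nearest-neighbour sample
            v = int(c * x + d * y + ty)
            if 0 <= u < tw and 0 <= v < th:  # pixels mapping outside stay 0
                out[y][x] = texture[v][u]
    return out

# 2x magnification: the inverse transform halves the output coordinates.
tex = [[1, 2], [3, 4]]
out = apply_affine(tex, 4, 4, (0.5, 0.0, 0.0, 0.0, 0.5, 0.0))
```

Using the inverse transform avoids holes in the output that forward mapping would leave; production systems would add filtering in place of the nearest-neighbour sample.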
Visual simulation of lightning
T. Reed, B. Wyvill. Proceedings of the 21st annual conference on Computer graphics and interactive techniques, July 24, 1994. DOI: 10.1145/192161.192256

A method for rendering lightning using conventional ray-tracing techniques is discussed. The approach taken is directed at producing aesthetic images for animation, rather than providing a realistic physically based model for rendering. A particle system is used to generate the path of the lightning channel, and subsequently to animate the lightning. A technique, using implicit surfaces, is introduced for illuminating objects struck by lightning.
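A particle-system lightning channel can be sketched as a jittered downward walk with occasional branching. All parameters here (step size, jitter range, branch probability, branch length) are invented for illustration; the paper's actual particle rules differ.

```python
import random

# Lightning-channel sketch: a particle descends from a start height to the
# ground, wandering horizontally and occasionally spawning short branches.
def lightning_path(start=(0.0, 10.0), step=0.5, jitter=0.4, branch_p=0.15,
                   rng=None):
    rng = rng or random.Random(42)  # fixed seed for repeatable animation frames
    path, branches = [start], []
    x, y = start
    while y > 0.0:
        x += rng.uniform(-jitter, jitter)  # horizontal wander
        y -= step                          # steady progress toward the ground
        path.append((x, y))
        if rng.random() < branch_p:
            # Spawn a short, more erratic offshoot from the current point.
            bx, by = x, y
            branch = [(bx, by)]
            for _ in range(4):
                bx += rng.uniform(-2 * jitter, 2 * jitter)
                by -= 0.5 * step
                branch.append((bx, by))
            branches.append(branch)
    return path, branches

path, branches = lightning_path()
```

Animating the seed or re-jittering per frame gives the flicker; the channel polyline would then be rendered, in the paper's approach, with implicit surfaces to light nearby struck objects.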
Computer graphics and economic transformations
Walt Bransford, M. Klein, Craig Moody, David Reed, M. Rothschild. Proceedings of the 21st annual conference on Computer graphics and interactive techniques, July 24, 1994. DOI: 10.1145/192161.192288

Computer graphics is intertwined with the patterns of economic change. The technologies of SIGGRAPH are understandable. The economic environment in which they thrive is not. The opportunities of the future are connected to our technological legacy, but forces more biological than mechanical are beginning to shape business and economic methods. These forces affect the diffusion of computer graphics and other advanced technologies into the commercial landscape and are radically changing competitive and social environments. Businesses are struggling to find and create markets for a technology with staggering potential. Society is undergoing yet another transformation of communications as well as personal habits and tastes. This panel will try to make sense of some of the elements that make up this energetic and complex economic arena. A historian of American technology and business will review the continuum in which invention and innovation continue to flourish. An economic model more suited to the “information age” will be introduced to SIGGRAPH at this panel. Against this background is an entrepreneur’s view of turning innovations into economic effect by giving them life in today’s marketplace. Finally, without invention all of this would be meaningless: a researcher’s view of the impact of these elements on pre-competitive computer graphics-based products concludes the panel. A new economic perspective on computer graphics-based technologies is an additional way to identify opportunity. These fresh and diverse views will try to establish that perspective.
Fast computation of shadow boundaries using spatial coherence and backprojections
A. J. Stewart, S. Ghali. Proceedings of the 21st annual conference on Computer graphics and interactive techniques, July 24, 1994. DOI: 10.1145/192161.192210

This paper describes a fast, practical algorithm to compute the shadow boundaries in a polyhedral scene illuminated by a polygonal light source. The shadow boundaries divide the faces of the scene into regions such that the structure or “aspect” of the visible area of the light source is constant within each region. The paper also describes a fast, practical algorithm to compute the structure of the visible light source in each region. Both algorithms exploit spatial coherence and are the most efficient yet developed. Given the structure of the visible light source in a region, queries of the form “What specific areas of the light source are visible?” can be answered almost instantly from any point in the region. This speeds up by several orders of magnitude the accurate computation of first-level diffuse reflections due to an area light source. Furthermore, the shadow boundaries form a good initial decomposition of the scene for global illumination computations.