We present an improved z-buffer-based CSG rendering algorithm, based on previous techniques using z-buffer parity-based surface clipping. We show that while this type of algorithm has been reported as requiring O(n²) time (where n is the number of primitives), an O(kn) algorithm (where k is the depth complexity) may be substituted. For cases where k is less than n, this translates into a significant performance gain. CR Categories: I.3.5 [Computing Methodologies]: Computer Graphics—Constructive solid geometry (CSG); I.3.3 [Computing Methodologies]: Computer Graphics—Display Algorithms; I.3.1 [Computing Methodologies]: Computer Graphics—Hardware Architecture
{"title":"An improved z-buffer CSG rendering algorithm","authors":"Nigel Stewart, G. Leach, S. John","doi":"10.1145/285305.285308","DOIUrl":"https://doi.org/10.1145/285305.285308","url":null,"abstract":"We present an improved z-buffer based CSG rendering algorithm, based on previous techniques using z-buffer parity based surface clipping. We show that while this type of algorithm has been reported as requiring O( ), (where is the number of primitives), an O( ) (where is depth complexity) algorithm may be substituted. For cases where is less than this translates into a significant performance gain. CR Categories: I.3.5 [Computing Methodologies]: Computer Graphics—Constructive solid geometry (CSG) I.3.3 [Computing Methodologies]: Computer Graphics—Display Algorithms I.3.1 [Computing Methodologies]: Computer Graphics—Hardware Architecture","PeriodicalId":298241,"journal":{"name":"Proceedings of the ACM SIGGRAPH/EUROGRAPHICS workshop on Graphics hardware","volume":"93 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122752634","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In the late 1990s, graphics hardware is experiencing a dramatic board-to-chip integration reminiscent of the minicomputer-to-microprocessor revolution of the 1980s. Today, mass-market PCs are beginning to match the 3D polygon and pixel rendering of a 1992 Silicon Graphics RealityEngine system. The extreme pace of technology evolution in the PC market is such that within one or two years the performance of a mainstream PC will be very close to that of the highest-performance 3D workstations. At that time, quality and performance demands will dictate serious changes in PC architecture as well as changes in the rendering pipeline and algorithms. This paper will discuss several potential areas of change.

A GENERAL PROBLEM STATEMENT

The biggest focus of 3D graphics applications on the PC is interactive entertainment, or games. This workload is extremely dynamic, with continuous updating of geometry, textures, animation, lighting, and shading. Although in other applications such as Computer-Aided Design (CAD) models may be static and retained-mode or display-list APIs may be used, it is common in games that geometry and textures change regularly. A good operating assumption is that everything changes every frame. The assumption of pervasive change puts a large burden on both the bandwidth and calculation capabilities of the graphics pipeline.

GEOMETRY AND PIXEL THROUGHPUT

As a baseline, we'll start with some data and cycle counting of a reasonable workload for an interactive application; PC graphics hardware is capable of this throughput. As an example, this is a bandwidth analysis of a 400 MHz Intel Pentium II PC with an NVIDIA RIVA TNT graphics processor. This analysis does not derive from a specific application, but is simply a counting exercise. Many applications push one or more of these limits, but few programs stress all axes. For a typical application to achieve 1M triangles/second, 100M 32-bit pixels/second, and 2 textures/pixel requires:

- 1M triangles * 3 vertices/triangle * 32 bytes/vertex = 100 MB; triangle data crosses the bus 3-5 times (read, transformed, and written by the CPU, and read by the graphics processor), so simply copying triangle data requires 300-500 MB/second on the PC buses.
- 100M pixels * 8 bytes/pixel (32-bit RGBA, 32-bit Z/stencil) = 800 MB; with 50% overhead for read-modify-write, this requires 1.2 GB/second.
- 2 textures/pixel * 4 texels/texture * 2 bytes/texel; a texture cache can create up to 4X reuse efficiency, so this requires 400 MB/second.

Assumptions here include:

- 32-byte vertices are Direct3D TLVertices (X,Y,Z,R,G,B,A,F,SR,SG,SB,W)
- triangle setup is done on the graphics processor
- bilinear texture filtering
- 16-bit texels are R5G6B5
- 50% of pixels written after Z-buffer read/compare

Transferring triangle vertex data to the graphics processor from the CPU is commonly the bottleneck. This is different from typical workstations or the PCs of just one year ago, when transform and lighting calculation, fill rate, or texture rate were the limiting factors.

GEOMETRY REPRESENT…
{"title":"Unsolved problems and opportunities for high-quality, high-performance 3D graphics on a PC platform","authors":"David B. Kirk","doi":"10.1145/285305.285306","DOIUrl":"https://doi.org/10.1145/285305.285306","url":null,"abstract":"In the late 1990’s, graphics hardware is experiencing a dramatic board-to-chip integration reminiscent to the minicomputer-to-microprocessor revolution of the 1980’s. Today, mass-market PCs are beginning to match the 3D polygon and pixel rendering of a 1992 Silicon Graphics Reality EngineTM system. The extreme pace of technology evolution in the PC market is such that within 1 or 2 years the performance of a mainstream PC will be very close to the highest performance 3D workstations. At that time, the quality and performance demands will dictate serious changes in PC architecture as well as changes in rendering pipeline and algorithms. This paper will discuss several potential areas of change. A GENERAL PROBLEM STATEMENT The biggest focus of 3D graphics applications on the PC is interactive entertainment, or games. This workload is extremely dynamic, with continuous updating of geometry, textures, animation, lighting, and shading. Although in other applications such as Computer-AidedDesign (CAD), models may be static and retained mode or display list APIs may be used, it is common in games that geometry and textures change regularly. A good operating assumption is that everything changes every frame. The assumption of pervasive change puts a large burden on both the bandwidth and calculation capabilities of the graphics pipeline. GEOMETRY AND PIXEL THROUGHPUT As a baseline, we’ll start with some data and cycle counting of a reasonable workload for an interactive application. PC graphics hardware is capable of this throughput. As an example, this is a bandwidth analysis of a 400 MHz Intel Pentium IITM PC with an Nvidia RNA TNTTM graphics processor. This analysis does not derive from a specific application, but is simply a counting exercise. Many applications push one or more of these limits, but few programs stress all axes. For a typical application to achieve 1M triangles/second, 1 OOM 32bit pixels/second, 2 textures/pixel requires: 1 M triangles * 3 vertices/triangle * 32 bytes/vertex = 100 MB; triangle data crosses the bus 3-5 times (read, transform and written by the CPU, and read by the graphics processor, so simply copying triangle data requires 300-500 MB/second on the PC buses. 1OOM pixels * 8 bytes/pixel (32bit RGBA, 32bit Z/stencil) = 800 MB; with 50% overhead for RMW requires 1.2 GB/second 2 textures/pixel * 4 texelsltexture * 2 bytee a texture cache can create up to 4X reuse efficiency, so requires 400 MB/second Assumptions here include: 32-byte vertices are Direct3DTM TLVertices (X,Y,Z,R,G,B,A,F,SR,SG,SB,W) triangle setup is done on the graphics processor bilinear texture filtering 16bit texels are RSG6B5 50% of pixels written after Zbuffer read/compare Transferring triangle vertex data to the graphics processor from the CPU is commonly the bottleneck. This is different from typical workstations or the PCs of just 1 year ago, when transform and lighting calculation, fill rate, or texture rate were limiting factors. 
GEOMETRY REPRESENT","PeriodicalId":298241,"journal":{"name":"Proceedings of the ACM SIGGRAPH/EUROGRAPHICS workshop on Graphics hardware","volume":"389 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126968698","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
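The counting exercise above can be reproduced mechanically. The short program below is an illustrative sketch only: it plugs the stated rates (1M triangles/s, 100M pixels/s, 2 textures/pixel, 3-5 bus crossings, 50% read-modify-write overhead, 4X texture cache reuse) into the same arithmetic and prints the resulting bandwidth figures, which land close to the rounded numbers quoted in the text.

```cpp
#include <cstdio>

// Back-of-the-envelope bandwidth arithmetic for the workload above, redone as a
// tiny program.  All rates and factors are the assumptions stated in the text.
int main()
{
    const double MB = 1e6;  // decimal megabytes, as in the text

    // Geometry: 1M triangles * 3 vertices * 32 bytes  ->  ~100 MB per second.
    double vertexBytes = 1e6 * 3 * 32;
    std::printf("vertex data per second: %.0f MB\n", vertexBytes / MB);
    std::printf("bus traffic at 3-5 crossings: %.0f-%.0f MB/s (~300-500 MB/s)\n",
                3 * vertexBytes / MB, 5 * vertexBytes / MB);

    // Framebuffer: 100M pixels * 8 bytes (32-bit color + 32-bit Z/stencil),
    // plus 50% read-modify-write overhead.
    double fbBytes = 100e6 * 8;
    std::printf("framebuffer: %.0f MB/s, with RMW overhead: %.1f GB/s\n",
                fbBytes / MB, 1.5 * fbBytes / 1e9);

    // Texture: 100M pixels * 2 textures * 4 texels * 2 bytes, divided by the
    // assumed 4x cache reuse efficiency.
    double texBytes = 100e6 * 2 * 4 * 2;
    std::printf("texture fetch: %.0f MB/s, after 4x cache reuse: %.0f MB/s\n",
                texBytes / MB, texBytes / 4 / MB);
    return 0;
}
```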
We present a method for volume rendering of regular grids which takes advantage of 3D texture mapping hardware currently available on graphics workstations. Our method produces accurate shading for arbitrary and dynamically changing directional lights, viewing parameters, and transfer functions. This is achieved by hardware interpolating the data values and gradients before software classification and shading. The method works equally well for parallel and perspective projections. We present two approaches for our method: one which takes advantage of software ray casting optimizations and another which takes advantage of hardware blending acceleration. CR Categories: I.3.1 [Computer Graphics]: Hardware Architecture; I.3.3 [Computer Graphics]: Picture/Image Generation; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism-Color, shading, shadowing, and texture
{"title":"High-quality volume rendering using texture mapping hardware","authors":"F. Dachille, K. Kreeger, Baoquan Chen, I. Bitter, A. Kaufman","doi":"10.1145/285305.285315","DOIUrl":"https://doi.org/10.1145/285305.285315","url":null,"abstract":"Wt present a method Jor volume rendering of regular grids cclhic~h takes advantage of <?D texture mapping hardware currc,rhlly available on graphics workstations. Our method products accurate shadang for arbitrary and dynamically changing directionul lights, viewing parameters, and transfer funclior~. TIlis is achieved by hardware interpolating the data values and gradients before software classification and shadrng. The method works equally well for parallel and perspective projections. We present two approaches for OUT method: one which takes advantage of software ray casting optimitaIrons nnd another which takes advantage of hardware blending (Acceleration. CR Categories: 13.1 [Computer Graphics]: Hardware Architecture; 1.3.3 [Computer Graphics]: Picture/Image Generation; 1.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism-Color, shading, shadowing, and texture","PeriodicalId":298241,"journal":{"name":"Proceedings of the ACM SIGGRAPH/EUROGRAPHICS workshop on Graphics hardware","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125072069","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Three-dimensional scenes have become an important form of content deliverable through the Internet. Standard formats such as Virtual Reality Modeling Language (VRML) make it possible to dynamically download complex scenes from a server directly to a web browser. However, limited bandwidth between servers and clients presents an obstacle to the availability of more complex scenes, since geometry and texture maps for a reasonably complex scene may take many minutes to transfer over a typical telephone modem link. This paper addresses one part of the bandwidth bottleneck, texture transmission. Current display methods transmit an entire texture to the client before it can be used for rendering. We present an alternative method which subdivides each texture into tiles, and dynamically determines on the client which tiles are visible to the user. Texture tiles are requested by the client in an order determined by the number of screen pixels affected by the texture tile, so that texture tiles which affect the greatest number of screen pixels are transmitted first. The client can render images during texture loading using tiles which have already been loaded. The tile visibility calculations take full account of occlusion and multiple texture image resolution levels, and are dynamically recalculated each time a new frame is rendered. We show how a few additions to the standard graphics hardware pipeline can add this capability without radical architecture changes, and with only moderate hardware cost. The addition of this capability makes it practical to use large textures even over relatively slow network connections.
{"title":"Texture tile visibility determination for dynamic texture loading","authors":"Michael E. Goss, Kei Yuasa","doi":"10.1145/285305.285312","DOIUrl":"https://doi.org/10.1145/285305.285312","url":null,"abstract":"Three-dimensional scenes have become an important form of content deliverable through the Internet. Standard formats such as Virtual Reality Modeling Language (VRML) make it possible to dynamically download complex scenes from a server directly to a web browser. However, limited bandwidth between servers and clients presents an obstacle to the availability of more complex scenes, since geometry and texture maps for a reasonably complex scene may take many minutes to transfer over a typical telephone modem link. This paper addresses one part of the bandwidth bottleneck, texture transmission. Current display methods transmit an entire texture to the client before it can be used for rendering. We present an alternative method which subdivides each texture into tiles, and dynamically determines on the client which tiles are visible to the user. Texture tiles are requested by the client in an order determined by the number of screen pixels affected by the texture tile, so that texture tiles which affect the greatest number of screen pixels are transmitted first. The client can render images during texture loading using tiles which have already been loaded. The tile visibility calculations take full account of occlusion and multiple texture image resolution levels, and are dynamically recalculated each time a new frame is rendered. We show how a few additions to the standard graphics hardware pipeline can add this capability without radical architecture changes, and with only moderate hardware cost. The addition of this capability makes it practical to use large textures even over relatively slow network connections.","PeriodicalId":298241,"journal":{"name":"Proceedings of the ACM SIGGRAPH/EUROGRAPHICS workshop on Graphics hardware","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124862729","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper we present a second-generation VIZARD system capable of rendering 256³ datasets at interactive frame rates while providing high image quality. In contrast to the previous VIZARD system, we use dedicated memory to store the dataset on the PCI card. Lossy and redundant compression of the dataset has been eliminated. Interactive change of shading and classification parameters is enabled by moving shading and classification from pre-processing into the pipeline. Memory bandwidth requirements are reduced by using a table of pre-calculated gradients. Thus, the gradient at a sample location requires an eight-voxel neighborhood instead of a 32-voxel neighborhood. We describe the generation of the discrete gradients and the impact on image quality. Furthermore, we present a parameterization of the ray in order to remove workload from the pipeline. Finally, we propose a PCI card serving as a platform for our second-generation VIZARD. The proposed PCI card uses programmable devices to enable the implementation of other hardware accelerators as well.
{"title":"Vizard II, a PCI-card for real-time volume rendering","authors":"M. Meissner, Urs Kanus, W. Straßer","doi":"10.1145/285305.285313","DOIUrl":"https://doi.org/10.1145/285305.285313","url":null,"abstract":"In this paper we present a second generation VIZARD system being capable of rendering 256 3 datasets at interactive frame-rates providing high image quality. In contrast to the previous VIZARD system, we use dedicated memory to store the dataset on the PCI card. Lossy and redundant compression of the dataset has been eliminated. Interactive c hange of shading and classiication parameters is enabled by m o v-ing shading and classiication from pre-processing into the pipeline. Memory bandwidth requirements are reduced by using a table of pre-calculated gradients. Thus, the gradient at sample location requires an eight neighborhood of voxels instead of a 32 neighborhood. We describe the generation of the discrete gradients and the impact on image quality. Furthermore, we present a parameterization of the ray in order to remove work-load from the pipeline. Finally, w e propose a PCI card serving as a platform for our second generation VIZARD. The proposed PCI card uses programmable devices to enable the implementation of other hardware accelerators as well.","PeriodicalId":298241,"journal":{"name":"Proceedings of the ACM SIGGRAPH/EUROGRAPHICS workshop on Graphics hardware","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127817503","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, new low-cost bump mapping hardware is presented. The new hardware approach does not rely on per-pixel lighting, but instead uses Gouraud-interpolated triangles. The bump mapping effect is applied by blending the calculated per-pixel bump map color onto the fragment's color. This allows real-time animated distant light sources to react to the specified bump map. The paper further investigates a number of different variants of recently proposed bump engines. These variants range from low-end PC solutions to highest-quality high-end solutions.
{"title":"Gouraud bump mapping","authors":"I. Ernst, H. Rüsseler, H. Schulz, O. Wittig","doi":"10.1145/285305.285311","DOIUrl":"https://doi.org/10.1145/285305.285311","url":null,"abstract":"In this paper a new low cost bump mapping hardware is prcsented. The new hardware approach does not rely on per pixel lighting, but instead uses Gouraud interpolated triangles. The bump mapping effect is applied by blending the calculated per pixel bump map color onto the fragment’s color. This allows realtime animated distant light-sources to react on the specified bump map. The paper further investigates a number of different variants of recently proposed bump engines. These variants range from lowend PC solution to highest quality high-end solutions. CR","PeriodicalId":298241,"journal":{"name":"Proceedings of the ACM SIGGRAPH/EUROGRAPHICS workshop on Graphics hardware","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121067249","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Texture mapping has become so ubiquitous in real-time graphics hardware that many systems are able to perform filtered texturing without any penalty in fill rate. The computation rates available in hardware have been outpacing the memory access rates, and texture systems are becoming constrained by memory bandwidth and latency. Caching in conjunction with prefetching can be used to alleviate this problem. In this paper, we introduce a prefetching texture cache architecture designed to take advantage of the access characteristics of texture mapping. The structures needed are relatively simple and are amenable to high clock rates. To quantify the robustness of our architecture, we identify a set of six scenes whose texture locality varies over nearly two orders of magnitude and a set of four memory systems with varying bandwidths and latencies. Through the use of a cycle-accurate simulation, we demonstrate that even in the presence of a high-latency memory system, our architecture can attain at least 97% of the performance of a zero-latency memory system.
{"title":"Prefetching in a texture cache architecture","authors":"Homan Igehy, Matthew Eldridge, Kekoa Proudfoot","doi":"10.1145/285305.285321","DOIUrl":"https://doi.org/10.1145/285305.285321","url":null,"abstract":"Texture mapping has become so ubiquitous in real-time graphics hardware that many systems are able to perform filtered texturing without any penalty in fill rate. The computation rates available in hardware have been outpacing the memory access rates, and texture systems are becoming constrained by memory bandwidth and latency. Caching in conjunction with prefetching can be used to alleviate this problem. In this paper, WC introduce a prefetching texture cache architecture designed to take advantage of the access characteristics of texture mapping. The structures needed are relatively simple and arc amenable to high clock rates. To quantify the robustness of our architecture, we identify a set of six scenes whose texture locality varies over nearly two orders of magnitude and a set 01 four memory systems with varying bandwidths and latencies. Through the use of a cycle-accurate simulation, we demonstrate that even in the presence of a high-latency memory system, our architecture can attain at least 97% of the performance of a zerolatency memory system. CR","PeriodicalId":298241,"journal":{"name":"Proceedings of the ACM SIGGRAPH/EUROGRAPHICS workshop on Graphics hardware","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125158524","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present a parallel 2D mesh-connected architecture with SIMD processing elements. The design allows for real-time volume rendering as well as interactive 3D segmentation and 3D feature extraction. This is possible because the SIMD processing elements are programmable, a feature which also allows the use of many different rendering algorithms. We present an algorithm which, with the addition of hardware resources, provides conflict-free access to volume slices along any of the three major axes. The volume access conflict has been the main reason why previous similar architectures could not perform real-time volume rendering. We present the performance of preliminary algorithms on a software simulator of the architecture design. CR Categories: C.1.2 [Processor Architectures]: Multiple Data Stream Architectures (Multiprocessors)-Single-instruction-stream, multiple-data-stream processors (SIMD); I.3.1 [Computer Graphics]: Hardware Architecture-Graphics processors, Parallel processing; I.4.6 [Image Processing and Computer Vision]: Segmentation
{"title":"PAVLOV: a programmable architecture for volume processing","authors":"K. Kreeger, A. Kaufman","doi":"10.1145/285305.285314","DOIUrl":"https://doi.org/10.1145/285305.285314","url":null,"abstract":"We present a purullel 2D mesh connected architecture with SIML) processing elements. The design allows for real-time volume rendering as well as interactive 30 segmentation and .1D feature extraction. Thas zs possible because the SIMD processing elements are programmable, a feature which also ullows the use of many different rendering algorithms. We present an algorithm which, with the addition of hardware re,sources, provides conflict free access to volume slices along any of the three major axes. The volume access conflict bus been the main reason why previous similar architectures could not perform real-time volume rendering. We present the performance of preliminary algorithms on a software simulator of the architecture design. CR Categories: C.1.2 [Processor Architectures]: Mult,iple Data Stream .4rchitectures (Multiprocessors)-Singleirlst,rllc:tion-streanl, multiple-data-stream processors (SIMD) ; 1.3.1 [Computer Graphics]: Hardware ArchitectureGraphics processors, Parallel processing; 1.4.6 [Image Proc.rssillg And Computer Vision]: Segmentation;","PeriodicalId":298241,"journal":{"name":"Proceedings of the ACM SIGGRAPH/EUROGRAPHICS workshop on Graphics hardware","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126043517","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
For interactive rendering of large polygonal objects, fast visibility queries are necessary to quickly decide whether polygonal objects are visible and need to be rendered. None of the numerous published algorithms provides the visibility-query performance needed for interactive rendering of large models. In this paper, we propose an OpenGL extension for fast occlusion queries. Added after the depth test stage of the OpenGL rendering pipeline, our algorithm provides fast queries to establish the occlusion of polygonal objects. Furthermore, hardware aspects of this proposal are discussed and possible implementations on two different graphics architectures are presented.
{"title":"Extending graphics hardware for occlusion queries in OpenGL","authors":"D. Bartz, M. Meissner, Tobias Hüttner","doi":"10.1145/285305.285317","DOIUrl":"https://doi.org/10.1145/285305.285317","url":null,"abstract":"For interactive rendering of large polygonal objects, fast visibility queries are necessary to quickly decide whether polygonal objects are visible and need to be rendered. None of the numerous published algorithms provide visibility performance for interactive rendering of large models. In this paper, we propose an OpenGL extension for fast occlusion queries. Added after the depth test stage of the OpenGL rendering pipeline, our algorithm provides fast queries to establish the occlusion of polygonal objects. Furthermore, hardware aspects of this proposal are discussed and possible implementations on two different graphics architectures are presented.","PeriodicalId":298241,"journal":{"name":"Proceedings of the ACM SIGGRAPH/EUROGRAPHICS workshop on Graphics hardware","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116553479","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Environment maps are widely used for approximating reflections in hardware-accelerated rendering applications. Unfortunately, the parameterizations for environment maps used in today's graphics hardware severely undersample certain directions, and can thus not be used from multiple viewing directions. Other parameterizations exist, but require operations that would be too expensive for hardware implementations. In this paper we introduce an inexpensive new parameterization for environment maps that allows us to reuse the environment map for any given viewing direction. We describe how, under certain restrictions, these maps can be used today in standard OpenGL implementations. Furthermore, we explore how OpenGL could be extended to support this kind of environment map more directly. CR Categories: I.3.1 [Computer Graphics]: Hardware Architecture-Graphics processors; I.3.3 [Computer Graphics]: Picture/Image Generation-Bitmap and framebuffer operations; I.3.6 [Computer Graphics]: Methodology and Techniques-Standards; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism-Color, Shading, Shadowing and Texture; I.4.1 [Image Processing and Computer Vision]: Digitization and Image Capture-Sampling
{"title":"View-independent environment maps","authors":"W. Heidrich, H. Seidel","doi":"10.1145/285305.285310","DOIUrl":"https://doi.org/10.1145/285305.285310","url":null,"abstract":"Environment maps are widely used for approximating reflections in hardware-accelerated rendering applications. Unfortunately, the parameterizations for environment maps used in today’s graphics hardware severely undersample certain directions, and can thus not be used from multiple viewing directions. Other parameterizations exist, but require operations that would be too expensive for hardware implementations. In this paper we introduce an inexpensive new parameterization for environment maps that allows us to reuse the environment map for any given viewing direction. We describe how, under certain restrictions, these maps can be used today in standard OpenGL implementations. Furthermore, we explore how OpenGL could be extended to support this kind of environment map more directly. CR Categories: 1.3.1 [Computer Graphics]: Hardware Architecture-Graphics processors; 1.3.3 [Computer Graphics]: Picture/Image Generation-Bitmap and framebuffer operations; 1.3.6 [Computer Graphics]: Methodology and Techniques--Standards 1.3.7 [Computer Graphics]: ThreeDimensional Graphics and Realism-Color, Shading, Shadowing and Texture 1.4.1 [Image Processing and Computer Vision]: Digitization and Image Capture ~-Sampling","PeriodicalId":298241,"journal":{"name":"Proceedings of the ACM SIGGRAPH/EUROGRAPHICS workshop on Graphics hardware","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120973834","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}