The transition from traditional 24-bit RGB to high dynamic range (HDR) images is hindered by excessively large file formats with no backwards compatibility. In this paper, we demonstrate a simple approach to HDR encoding that parallels the evolution of color television from its grayscale beginnings. A tone-mapped version of each HDR original is accompanied by restorative information carried in a subband of a standard output-referred image. This subband contains a compressed ratio image, which, when multiplied by the tone-mapped foreground, recovers the HDR original. The tone-mapped image data are also compressed, and the composite is delivered in a standard JPEG wrapper. To naïve software, the image looks like any other and displays as a tone-mapped version of the original. To HDR-enabled software, the foreground image is merely a tone-mapping suggestion, as the original pixel data are available by decoding the information in the subband. Our method further extends the color range to encompass the visible gamut, enabling a new generation of display devices that are just beginning to enter the market.
G. Ward and Maryann Simmons. "JPEG-HDR: a backwards-compatible, high dynamic range extension to JPEG." ACM SIGGRAPH 2005 Courses, July 31, 2005. doi:10.1145/1198555.1198708
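The ratio-image round trip described in the abstract can be sketched in a few lines of numpy. This is a toy illustration of the idea only, not the actual JPEG-HDR subband bitstream: the ±8-stop quantization range, the 8-bit encoding, and the Reinhard-style tone curve used for the demo are all assumptions made for the sketch.

```python
import numpy as np

def encode_ratio_image(hdr, tonemapped, lo=-8.0, hi=8.0):
    # Quantize log2(HDR / tone-mapped) to 8 bits: the subband payload.
    ratio = np.log2(np.maximum(hdr, 1e-9) / np.maximum(tonemapped, 1e-9))
    q = np.clip((ratio - lo) / (hi - lo), 0.0, 1.0)
    return np.round(q * 255).astype(np.uint8)

def decode_hdr(tonemapped, subband, lo=-8.0, hi=8.0):
    # Multiply the tone-mapped foreground by the dequantized ratio image.
    ratio = lo + (subband.astype(np.float64) / 255.0) * (hi - lo)
    return tonemapped * np.exp2(ratio)

# Round trip on synthetic luminances spanning four orders of magnitude.
hdr = np.logspace(-2, 2, 16)      # "scene-referred" values
tm = hdr / (1.0 + hdr)            # a simple global tone-mapping curve
sub = encode_ratio_image(hdr, tm)
rec = decode_hdr(tm, sub)
print(np.max(np.abs(np.log2(rec / hdr))))  # residual error, in stops
```

Naïve software would simply display `tm`; HDR-aware software multiplies it back up. The real format additionally compresses both the tone-mapped layer and the subband and delivers the composite in a standard JPEG wrapper.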
When characterizing a shape or changes in shape, we must first ask: what can we measure about a shape? For example, for a region in ℝ³ we may ask for its volume or its surface area. If the object at hand undergoes deformation due to forces acting on it, we may need to formulate the laws governing the change in shape in terms of measurable quantities and their change over time. Usually such measurable quantities for a shape are defined with the help of integral calculus and often require some amount of smoothness on the object to be well defined. In this chapter we take a more abstract approach to the question of measurable quantities, one which allows us to define notions such as mean curvature integrals and the curvature tensor for piecewise linear meshes without having to worry about the meaning of second derivatives in settings in which they do not exist. In fact, we give an account of a classical result due to Hadwiger, which shows that for a convex, compact set in ℝⁿ there are only n + 1 unique measurements if we require that the measurements be invariant under Euclidean motions (and satisfy certain "sanity" conditions). We will see how these measurements are constructed in a straightforward and elementary manner, and that they can be read off from a characteristic polynomial due to Steiner. This polynomial describes the volume of the family of shapes which arises when we "grow" a given shape. As a practical tool arising from these considerations, we will see that there is a well-defined notion of the curvature tensor for piecewise linear meshes, together with very simple formulas for quantities needed in physical simulation with such meshes. Much of the treatment here will initially be limited to convex bodies to keep things simple; this limitation will be removed at the very end.
P. Schröder. "What can we measure?" ACM SIGGRAPH 2005 Courses, July 31, 2005. doi:10.1145/1198555.1198661
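The Steiner polynomial mentioned in the abstract can be checked concretely. For a convex body K in ℝ³ grown by a ball of radius ε, Vol(K ⊕ εB) = V + Aε + Mε² + (4π/3)ε³, where V is volume, A is surface area, and M is the total mean curvature; the coefficients are exactly Hadwiger's n + 1 = 4 measurements for n = 3. For a convex polytope, M reduces to half the sum over edges of edge length times exterior dihedral angle. A short sketch verifies this for a cube against a direct decomposition of the offset body:

```python
import math

def steiner_volume_cube(a, eps):
    # Steiner polynomial Vol(K + eps*B) = V + A*eps + M*eps^2 + (4*pi/3)*eps^3
    # for a cube of side a: 12 edges, exterior dihedral angle pi/2 at each.
    V = a ** 3
    A = 6 * a ** 2
    M = 0.5 * 12 * a * (math.pi / 2)  # total mean curvature of the cube
    return V + A * eps + M * eps ** 2 + (4 * math.pi / 3) * eps ** 3

def offset_volume_cube(a, eps):
    # Direct decomposition of the grown cube: the cube itself, six face
    # slabs, twelve quarter-cylinders along the edges, and eight sphere
    # octants at the corners (together one full ball).
    return (a ** 3
            + 6 * a ** 2 * eps
            + 12 * a * (math.pi * eps ** 2 / 4)
            + (4 * math.pi / 3) * eps ** 3)

print(steiner_volume_cube(2.0, 0.5))
print(offset_volume_cube(2.0, 0.5))
```

The two computations agree term by term, which is the content of the Steiner formula for this polytope; the chapter itself derives the general statement.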
We present the Virtual Showcase, a new multiviewer augmented reality display device that has the same form factor as a real showcase traditionally used for museum exhibits.
O. Bimber, B. Fröhlich, and D. Schmalstieg. "The virtual showcase." ACM SIGGRAPH 2005 Courses, July 31, 2005. doi:10.1145/1198555.1198713
Abhinav Dayal, Cliff Woolley, B. Watson, D. Luebke
We propose an adaptive form of frameless rendering with the potential to dramatically increase rendering speed over conventional interactive rendering approaches. Without the rigid sampling patterns of framed renderers, sampling and reconstruction can adapt with very fine granularity to spatio-temporal color change. A sampler uses closed-loop feedback to guide sampling toward edges or motion in the image. Temporally deep buffers store all the samples created over a short time interval for use in reconstruction and as sampler feedback. GPU-based reconstruction responds both to sampling density and space-time color gradients. Where the displayed scene is static, spatial color change dominates and older samples are given significant weight in reconstruction, resulting in sharper and eventually antialiased images. Where the scene is dynamic, more recent samples are emphasized, resulting in less sharp but more up-to-date images. We also use sample reprojection to improve reconstruction and guide sampling toward occlusion edges, undersampled regions, and specular highlights. In simulation, our frameless renderer requires an order of magnitude fewer samples than traditional rendering of similar visual quality (as measured by RMS error), while introducing overhead amounting to 15% of computation time.
Abhinav Dayal, Cliff Woolley, B. Watson, and D. Luebke. "Adaptive frameless rendering." ACM SIGGRAPH 2005 Courses, July 31, 2005. doi:10.1145/1198555.1198763
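The age-dependent weighting of deep-buffer samples that the abstract describes can be illustrated with a toy one-pixel reconstruction. The exponential age falloff and the interpolation of the time constant between a "static" and a "dynamic" setting are invented for illustration; the paper's actual GPU reconstruction also responds to sampling density and space-time color gradients, not just sample age.

```python
import math

def reconstruct(samples, temporal_gradient, tau_static=1.0, tau_dynamic=0.05):
    # samples: list of (color, age_in_seconds) from a temporally deep buffer.
    # temporal_gradient in [0, 1]: 0 = static region, so old samples keep
    # significant weight (sharper, eventually antialiased result);
    # 1 = dynamic region, so recent samples dominate (less sharp, up to date).
    tau = tau_static + temporal_gradient * (tau_dynamic - tau_static)
    weights = [math.exp(-age / tau) for _, age in samples]
    return sum(w * c for w, (c, _) in zip(weights, samples)) / sum(weights)

samples = [(0.2, 0.30), (0.8, 0.01)]  # an older sample and a fresh one
static_est = reconstruct(samples, temporal_gradient=0.0)
dynamic_est = reconstruct(samples, temporal_gradient=1.0)
print(static_est, dynamic_est)  # dynamic estimate tracks the newer sample
```

In the static case both samples contribute and the estimate averages them; in the dynamic case the fresh sample dominates, which is the trade-off the abstract describes between sharpness and currency.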
Course Description: We present an introduction to the digital modeling of materials for realistic image synthesis. We begin with a visual tour of images of real materials and consider how they are classified by the effects that must be modeled to render them realistically. Essential appearance concepts such as diffuse reflection, specular reflection, subsurface scattering, and wave effects are defined and illustrated. We then discuss popular numerical models such as Ward, Lafortune, and Cook-Torrance, in terms of the effects they capture and the visual impact of each model's parameters; we do not cover their mathematical derivations. We conclude with models that simulate the processing or aging of materials to predict their variation over time. The goal of the course is to provide an introduction to translating observations of materials in the real world into model parameters and/or code for synthesizing realistic images.

Course Prerequisites: The course requires only an introductory level of familiarity with computer graphics from either a previous course or practical experience. We assume that students understand basic terms and ideas such as setting a pixel color by specifying values of red, green, and blue, and projecting a triangle onto a set of pixels given the specification of a virtual pinhole camera.

Instructors: Holly Rushmeier is a Professor of Computer Science at Yale University. Since receiving her Ph.D. from Cornell in 1988, she has conducted research in global illumination, data visualization, applications of perception, 3D scanning, and applications of computer graphics in cultural heritage. She has published in SIGGRAPH, ACM TOG, IEEE CG&A, and IEEE TVCG. Over the past 15 years, she has organized SIGGRAPH courses on radiosity, global illumination, and a scanning case study, and has lectured in SIGGRAPH courses on capturing surface properties and applying perceptual principles to rendering.

Julie Dorsey is a Professor of Computer Science at Yale University, where she teaches computer graphics. Before joining the Yale faculty, she was a tenured faculty member at MIT. She received undergraduate (BS, BArch 1987) and graduate (MS 1990, PhD 1993) degrees from Cornell University. Her research interests include photorealistic image synthesis, material and texture models, illustration techniques, and interactive visualization of complex scenes. In addition to serving on numerous conference program committees, she is a member of the editorial board of IEEE Transactions on Visualization and Computer Graphics and is an area …
Julie Dorsey and H. Rushmeier. "Digital modeling of the appearance of materials." ACM SIGGRAPH 2005 Courses, July 31, 2005. doi:10.1145/1198555.1198694
Sunil Hadap and Vangelis Kokkevis. "Introduction to articulated rigid body dynamics." ACM SIGGRAPH 2005 Courses, July 31, 2005. doi:10.1145/1198555.1198559 (Copyright restrictions prevent ACM from providing the full text for this work.)