Assorted pixels: multi-sampled imaging with structural models (ACM SIGGRAPH 2005 Courses, 2005; doi:10.1145/1198555.1198563)
S. Nayar, S. Narasimhan
Multi-sampled imaging is a general framework for using the pixels of an image detector to simultaneously sample multiple dimensions of imaging (space, time, spectrum, brightness, polarization, etc.). The mosaic of red, green, and blue spectral filters found in most solid-state color cameras is one example of multi-sampled imaging. We briefly describe how multi-sampling can be used to explore other dimensions of imaging. Once such an image is captured, smooth reconstructions along the individual dimensions can be obtained using standard interpolation algorithms, but typically at a substantial cost in resolution (and hence image quality). Significantly greater resolution can be extracted in each dimension by noting that the light fields associated with real scenes contain enormous redundancies, which make the different dimensions highly correlated. Multi-sampled images can therefore be better interpolated using local structural models that are learned offline from a diverse set of training images. The specific type of structural model we use is based on polynomial functions of measured image intensities; it is both effective and computationally efficient. We demonstrate the benefits of structural interpolation using three applications: (a) traditional color imaging with a mosaic of color filters, (b) high dynamic range monochrome imaging using a mosaic of exposure filters, and (c) high dynamic range color imaging using a mosaic of overlapping color and exposure filters.
{"title":"Session details: Digital face cloning","authors":"","doi":"10.1145/3245703","DOIUrl":"https://doi.org/10.1145/3245703","url":null,"abstract":"","PeriodicalId":192758,"journal":{"name":"ACM SIGGRAPH 2005 Courses","volume":"276 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127549137","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"High dynamic range imaging and image-based lighting","authors":"E. Reinhard, P. Debevec, G. Ward, S. Pattanaik","doi":"10.1145/1198555.1198707","DOIUrl":"https://doi.org/10.1145/1198555.1198707","url":null,"abstract":"","PeriodicalId":192758,"journal":{"name":"ACM SIGGRAPH 2005 Courses","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127431258","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Developing mobile 3D applications with OpenGL ES and M3G (ACM SIGGRAPH 2005 Courses, 2005; doi:10.1145/1198555.1198730)
K. Pulli, Jani Vaarala, Ville Miettinen, T. Aarnio, M. Callow
Mobile phones offer exciting new opportunities for graphics application developers. However, they also have significant limitations compared to traditional desktop graphics environments, including the absence of dedicated graphics hardware, limited memory (both RAM and ROM), limited communications bandwidth, and the lack of floating-point hardware. Existing graphics APIs ignore these limitations and are thus infeasible to implement on embedded devices. This course presents two new 3D graphics APIs that address the special needs and constraints of mobile/embedded platforms: OpenGL ES and M3G. OpenGL ES is a lightweight version of the well-known workstation standard, offering a subset of OpenGL 1.5 capability plus support for fixed-point arithmetic. M3G, the Mobile 3D Graphics API for Java MIDP (Mobile Information Device Profile), also known as JSR-184, provides scene-graph and animation support, a binary file format, and an immediate rendering mode that bypasses the scene graph. These APIs provide powerful graphics capabilities in a form that fits well on today's devices, and they will support hardware acceleration in the future. The course begins with a discussion of the target environments and their limitations, and of general techniques for coping with platform constraints (such as fixed-point arithmetic), followed by detailed presentations of the APIs. For each API, we describe the included functionality and compare it to related workstation standards, explaining what was left out and why. We also discuss practical aspects of working with the APIs on the target platforms, and present strategies for porting existing applications and creating new ones.
Categories and Subject Descriptors (according to ACM CCS): I.3.6 [Computer Graphics]: Standards
Anatomic modeling from unstructured samples using variational implicit surfaces (ACM SIGGRAPH 2005 Courses, 2005; doi:10.1145/1198555.1198654)
T. Yoo, B. Morse, K. Subramanian, P. Rheingans, M. Ackerman
We describe the use of variational implicit surfaces (level sets of an embedded generating function modeled using radial basis interpolants) in anatomic modeling. This technique allows the practitioner to employ sparsely and unevenly sampled data to represent complex biological surfaces, including data acquired as a series of non-parallel image slices. The method inherently accommodates interpolation across irregular spans. In addition, shapes with arbitrary topology are easily represented without interpolation or aliasing errors arising from discrete sampling. To demonstrate the medical use of variational implicit surfaces, we present the reconstruction of the inner surfaces of blood vessels from a series of endovascular ultrasound images.
Using the CW-complex to represent the topological structure of implicit surfaces and solids (ACM SIGGRAPH 2005 Courses, 2005; doi:10.1145/1198555.1198643)
J. Hart
We investigate the CW-complex as a data structure for visualizing and controlling the topology of implicit surfaces. Previous methods for controlling the blending of implicit surfaces redefined the contribution of a metaball or unioned blended components. Morse theory provides new insight into the topology of the surface a function implicitly defines by studying the critical points of the function. These critical points are organized by a separatrix structure into a CW-complex, which forms a topological skeleton of the object, indicating connectedness, and the possibility of connectedness, at various locations in the surface model. Definitions, algorithms, and applications for the CW-complex of an implicit surface and the solid it bounds are given as a preliminary step toward direct control of the topology of an implicit surface.
Realtime ray tracing for current and future games (ACM SIGGRAPH 2005 Courses, 2005; doi:10.1145/1198555.1198762)
Jörg Schmittler, Daniel Pohl, Tim Dahmen, C. Vogelgsang, P. Slusallek
Recently, realtime ray tracing has been developed to the point where it is becoming a viable alternative to the current rasterization approach for interactive 3D graphics. With the availability of a first prototype graphics board based purely on ray tracing, we have all the ingredients for a new generation of 3D graphics technology that could have significant consequences for computer gaming. However, hardly any research has looked at how games could benefit from ray tracing. In this paper we describe our experience with two games: the adaptation of a well-known first-person shooter to a ray tracing engine, and the development of a new game designed specifically to exploit the features of ray tracing. We discuss how existing game features can be implemented in a ray tracing context and what new effects and improvements ray tracing enables. Both projects show how ray tracing allows for highly realistic images while greatly simplifying content creation.
{"title":"Compactly supported RBFs in the management of implicit surfaces","authors":"T. Yoo","doi":"10.1145/1198555.1198644","DOIUrl":"https://doi.org/10.1145/1198555.1198644","url":null,"abstract":"","PeriodicalId":192758,"journal":{"name":"ACM SIGGRAPH 2005 Courses","volume":"152 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132345648","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}