{"title":"Using the CW-complex to represent the topological structure of implicit surfaces and solids","authors":"J. Hart","doi":"10.1145/1198555.1198643","DOIUrl":"https://doi.org/10.1145/1198555.1198643","url":null,"abstract":"We investigate the CW-complex as a data structure for visualizing and controlling the topology of implicit surfaces. Previous methods for controlling the blending of implicit surfaces redefined the contribution of a metaball or unioned blended components. Morse theory provides new insight into the topology of the surface a function implicitly defines by studying the critical points of the function. These critical points are organized by a separatrix structure into a CW-complex. This CW-complex forms a topological skeleton of the object, indicating connectedness and the possibility of connectedness at various locations in the surface model. Definitions, algorithms and applications for the CW-complex of an implicit surface and the solid it bounds are given as a preliminary step toward direct control of the topology of an implicit surface.","PeriodicalId":192758,"journal":{"name":"ACM SIGGRAPH 2005 Courses","volume":"109 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128594192","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
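The record above leans on Morse theory: critical points of the implicit function, classified by the sign pattern of the Hessian's eigenvalues, supply the cells of the CW-complex. A minimal numerical sketch (not Hart's algorithm; the two-blob model, Newton solver, and finite-difference step sizes are illustrative assumptions) locates the saddle between two Gaussian metaballs and reports its Morse index:

```python
import numpy as np

def f(p, centers=np.array([[-1.0, 0, 0], [1.0, 0, 0]]), T=0.5):
    """Implicit 'blobby' model: two Gaussian metaballs minus an iso-threshold."""
    return sum(np.exp(-np.sum((p - c) ** 2)) for c in centers) - T

def grad(p, h=1e-5):
    """Central-difference gradient of f."""
    g = np.zeros(3)
    for i in range(3):
        e = np.zeros(3); e[i] = h
        g[i] = (f(p + e) - f(p - e)) / (2 * h)
    return g

def hessian(p, h=1e-4):
    """Finite-difference Hessian of f, symmetrized."""
    H = np.zeros((3, 3))
    for i in range(3):
        e = np.zeros(3); e[i] = h
        H[:, i] = (grad(p + e) - grad(p - e)) / (2 * h)
    return 0.5 * (H + H.T)

def critical_point(p0, iters=30):
    """Newton iteration on grad f = 0 to find a nearby critical point."""
    p = np.array(p0, dtype=float)
    for _ in range(iters):
        p -= np.linalg.solve(hessian(p), grad(p))
    return p

# The midpoint between the blobs is a saddle; its Morse index is the
# number of negative Hessian eigenvalues.
p = critical_point([0.1, 0.05, 0.0])
index = int(np.sum(np.linalg.eigvalsh(hessian(p)) < 0))
```

By symmetry the saddle sits at the origin with Morse index 2; in the paper's construction, critical points of each index become cells of the CW-complex, joined by separatrices (integral curves of the gradient).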
{"title":"High dynamic range imaging and image-based lighting","authors":"E. Reinhard, P. Debevec, G. Ward, S. Pattanaik","doi":"10.1145/1198555.1198707","DOIUrl":"https://doi.org/10.1145/1198555.1198707","url":null,"abstract":"","PeriodicalId":192758,"journal":{"name":"ACM SIGGRAPH 2005 Courses","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127431258","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Session details: Digital face cloning","authors":"","doi":"10.1145/3245703","DOIUrl":"https://doi.org/10.1145/3245703","url":null,"abstract":"","PeriodicalId":192758,"journal":{"name":"ACM SIGGRAPH 2005 Courses","volume":"276 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127549137","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Assorted pixels: multi-sampled imaging with structural models","authors":"S. Nayar, S. Narasimhan","doi":"10.1145/1198555.1198563","DOIUrl":"https://doi.org/10.1145/1198555.1198563","url":null,"abstract":"Multi-sampled imaging is a general framework for using pixels on an image detector to simultaneously sample multiple dimensions of imaging (space, time, spectrum, brightness, polarization, etc.). The mosaic of red, green and blue spectral filters found in most solid-state color cameras is one example of multi-sampled imaging. We briefly describe how multi-sampling can be used to explore other dimensions of imaging. Once such an image is captured, smooth reconstructions along the individual dimensions can be obtained using standard interpolation algorithms. Typically, this results in a substantial reduction of resolution (and hence image quality). One can extract significantly greater resolution in each dimension by noting that the light fields associated with real scenes have enormous redundancies within them, causing different dimensions to be highly correlated. Hence, multi-sampled images can be better interpolated using local structural models that are learned offline from a diverse set of training images. The specific type of structural models we use are based on polynomial functions of measured image intensities. They are very effective as well as computationally efficient. We demonstrate the benefits of structural interpolation using three specific applications. 
These are (a) traditional color imaging with a mosaic of color filters, (b) high dynamic range monochrome imaging using a mosaic of exposure filters, and (c) high dynamic range color imaging using a mosaic of overlapping color and exposure filters.","PeriodicalId":192758,"journal":{"name":"ACM SIGGRAPH 2005 Courses","volume":"os-16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127762740","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
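The "structural models" in the record above are polynomial functions of measured neighbouring intensities, fit offline to training images. A toy degree-1 sketch (the synthetic smooth images and 4-neighbour green predictor are my assumptions, not the paper's training set or model order) shows why the learned weights can only match or beat plain averaging on the training data:

```python
import numpy as np

rng = np.random.default_rng(0)

def smooth_image(n=32):
    """Synthetic smooth image, a stand-in for natural training photographs."""
    x, y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
    a, b, c = rng.uniform(1, 5, 3)
    return np.sin(a * x + b * y) + 0.5 * np.cos(c * x * y)

# Training pairs: four measured green neighbours -> the missing centre green.
X, t = [], []
for _ in range(20):
    img = smooth_image()
    for i in range(1, 31, 2):        # pixels where green was not sampled
        for j in range(1, 31, 2):
            X.append([img[i - 1, j], img[i + 1, j], img[i, j - 1], img[i, j + 1]])
            t.append(img[i, j])
X, t = np.array(X), np.array(t)

coef, *_ = np.linalg.lstsq(X, t, rcond=None)  # learned degree-1 model
pred = X @ coef
bilinear = X.mean(axis=1)                     # standard interpolation baseline

def rmse(e):
    return float(np.sqrt(np.mean(e ** 2)))
```

Since the plain average is itself one linear model, least squares can never do worse on the training set; the paper's gains come from higher-order polynomial terms and real image statistics.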
{"title":"Developing mobile 3D applications with OpenGL ES and M3G","authors":"K. Pulli, Jani Vaarala, Ville Miettinen, T. Aarnio, M. Callow","doi":"10.1145/1198555.1198730","DOIUrl":"https://doi.org/10.1145/1198555.1198730","url":null,"abstract":"Mobile phones offer exciting new opportunities for graphics application developers. However, they also have significant limitations compared to traditional desktop graphics environments, including absence of dedicated graphics hardware, limited memory (both RAM and ROM), limited communications bandwidth, and lack of floating point hardware. Existing graphics APIs ignore these limitations and thus are infeasible to implement in embedded devices. This course presents two new 3D graphics APIs that address the special needs and constraints of mobile/embedded platforms: OpenGL ES and M3G. OpenGL ES is a light-weight version of the well-known workstation standard, offering a subset of OpenGL 1.5 capability plus support for fixed point arithmetic. M3G, Mobile 3D Graphics API for Java MIDP (Mobile Information Device Profile), also known as JSR-184, provides scene graph and animation support, binary file format, and immediate mode rendering that bypasses scene graphs. These APIs provide powerful graphics capabilities in a form that fits well on today’s devices, and will support hardware acceleration in the future. The course begins with a discussion of the target environments and their limitations, and general techniques for coping with platform/environment constraints (such as fixed point arithmetic). This is followed by detailed presentations of the APIs. For each API, we describe the included functionality and compare it to related workstation standards, explaining what was left out and why. We also discuss practical aspects of working with the APIs on the target platforms, and present strategies for porting existing applications and creating new ones. 
Categories and Subject Descriptors (according to ACM CCS) : I.3.6 [Computer Graphics]: Standards","PeriodicalId":192758,"journal":{"name":"ACM SIGGRAPH 2005 Courses","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129187969","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
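One coping technique named in the record above is fixed-point arithmetic: OpenGL ES exposes a GLfixed type that stores real numbers as signed 16.16 integers. A small sketch of the underlying integer operations (the helper names are mine, not part of either API):

```python
ONE = 1 << 16  # 16.16 fixed point: 16 integer bits, 16 fractional bits

def to_fixed(f):
    """Encode a float as a 16.16 fixed-point integer."""
    return int(round(f * ONE))

def to_float(x):
    """Decode a 16.16 fixed-point integer back to a float."""
    return x / ONE

def fx_mul(a, b):
    """Multiply: the raw product carries 32 fractional bits, so shift down by 16."""
    return (a * b) >> 16

def fx_div(a, b):
    """Divide: pre-shift the numerator up by 16 to preserve fractional precision."""
    return (a << 16) // b
```

In C the 64-bit intermediate product of `fx_mul` must be handled explicitly; Python's unbounded integers hide that, and the floor division here assumes non-negative operands.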
{"title":"Computational photography","authors":"Ramesh Raskar, J. Tumblin","doi":"10.1145/1198555.1198561","DOIUrl":"https://doi.org/10.1145/1198555.1198561","url":null,"abstract":"Computational photography combines plentiful computing, digital sensors, modern optics, actuators, probes and smart lights to escape the limitations of traditional film cameras and enables novel imaging applications. Unbounded dynamic range, variable focus, resolution, and depth of field, hints about shape, reflectance, and lighting, and new interactive forms of photos that are partly snapshots and partly videos are just some of the new applications found in Computational Photography. The computational techniques encompass methods from modification of imaging parameters during capture to sophisticated reconstructions from indirect measurements. We provide a practical guide to topics in image capture and manipulation methods for generating compelling pictures for computer graphics and for extracting scene properties for computer vision, with several examples. Many ideas in computational photography are still relatively new to digital artists and programmers and there is no up-to-date reference text. A larger problem is that a multi-disciplinary field that combines ideas from computational methods and modern digital photography involves a steep learning curve. For example, photographers are not always familiar with advanced algorithms now emerging to capture high dynamic range images, but image processing researchers face difficulty in understanding the capture and noise issues in digital cameras. These topics, however, can be easily learned without extensive background. The goal of this STAR is to present both aspects in a compact form. The new capture methods include sophisticated sensors, electromechanical actuators and on-board processing. 
Examples include adaptation to sensed scene depth and illumination, taking multiple pictures by varying camera parameters or actively modifying the flash illumination parameters. A class of modern reconstruction methods is also emerging. The methods can achieve a ‘photomontage’ by optimally fusing information from multiple images, improve signal to noise ratio and extract scene features such as depth edges. The STAR briefly reviews fundamental topics in digital imaging and then provides a practical guide to underlying techniques beyond image processing such as gradient domain operations, graph cuts, bilateral filters and optimizations. The participants learn about topics in image capture and manipulation methods for generating compelling pictures for computer graphics and for extracting scene properties for computer vision, with several examples. We hope to provide enough fundamentals to satisfy the technical specialist without intimidating the curious graphics researcher interested in recent advances in photography. The intended audience is photographers, digital artists, image processing programmers and vision researchers using or building applications for digital cameras or images. They will learn about camera fundamentals and powerful computational tools, along with many real world examples.","PeriodicalId":192758,"journal":{"name":"ACM SIGGRAPH 2005 Courses","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130195902","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
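Among the techniques this record lists beyond plain image processing, the bilateral filter is the most self-contained: each output sample is a weighted average whose weights combine spatial closeness with intensity (range) closeness, so smoothing stops at edges. A 1-D sketch (the step signal, sigmas, and radius are illustrative choices, not values from the course):

```python
import numpy as np

def bilateral_1d(signal, radius=3, sigma_s=2.0, sigma_r=0.2):
    """Edge-preserving smoothing: spatial Gaussian x range Gaussian weights."""
    out = np.empty(len(signal))
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        window = signal[lo:hi]
        d = np.arange(lo, hi) - i
        w = (np.exp(-d ** 2 / (2 * sigma_s ** 2)) *
             np.exp(-(window - signal[i]) ** 2 / (2 * sigma_r ** 2)))
        out[i] = np.sum(w * window) / np.sum(w)
    return out

# A noisy step edge: the noise is averaged away, but the jump survives
# because samples across the edge receive near-zero range weights.
rng = np.random.default_rng(1)
step = np.concatenate([np.zeros(50), np.ones(50)]) + 0.05 * rng.standard_normal(100)
smoothed = bilateral_1d(step)
```

Shrinking `sigma_r` toward zero turns the filter into the identity; growing it recovers an ordinary Gaussian blur that smears the edge.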