A CMOS camera-based man-machine input device for large-format interactive displays
Gerald D. Morrison. ACM SIGGRAPH 2007 courses, August 5, 2007. https://doi.org/10.1145/1281500.1281686

Human-computer interaction using large-format displays is an active area of research into how humans can work more effectively with computers and other machines. For this to happen, there must be an enabling technology that creates the interface between man and machine. Touch capability in a large-format display is advantageous because a large display area is informationally dense, and touch provides a natural, life-size interface to that information. This paper describes a new enabling technology: a camera-based man-machine input device that uses smart cameras to analyze the scene directly in front of a large-format computer display. The analysis determines where a user has touched the display and treats that contact as a mouse click, thereby controlling the computer. Significant technological problems have been overcome to make the system robust enough for commercialization. The paper also describes the camera-based system architecture and presents its advantages and new capabilities. The technology is ideally suited to large-format computer displays, creating a natural interface with familiar usage paradigms for human-computer interaction.
Part IV: runtime texture synthesis
S. Lefebvre. ACM SIGGRAPH 2007 courses, August 5, 2007. https://doi.org/10.1145/1281500.1281615

A typical texture synthesis algorithm takes a small example image as input and, within a few minutes, produces a much larger image that resembles it. Such algorithms are extremely useful as off-line texture generators. However, once the texture has been synthesized there is no option other than to treat it as a regular image: it must be stored on disk and loaded into graphics hardware memory for rendering.
Real-Time Isosurface Extraction Using the GPU Programmable Geometry Pipeline
Natalya Tatarchuk, Jeremy Shopf, Christopher DeCoro. ACM SIGGRAPH 2007 courses, August 5, 2007. https://doi.org/10.1145/1281500.1361219

Figure 1. We show the result of extracting a series of highly detailed isosurfaces at interactive rates. Our system implements a hybrid cubes-tetrahedra method, which leverages the strengths of each as applicable to the unique architecture of the GPU. The left pair of images (wireframe and shaded, using a base cube grid of 64) shows only an extracted isosurface, while the right pair displays an alternate isosurface overlaid with a volume rendering.
Assorted pixels: multi-sampled imaging with structural models
S. Nayar, S. Narasimhan. ACM SIGGRAPH 2007 courses, August 5, 2007. https://doi.org/10.1145/1281500.1281504

Multi-sampled imaging is a general framework for using pixels on an image detector to simultaneously sample multiple dimensions of imaging (space, time, spectrum, brightness, polarization, etc.). The mosaic of red, green and blue spectral filters found in most solid-state color cameras is one example of multi-sampled imaging. We briefly describe how multi-sampling can be used to explore other dimensions of imaging. Once such an image is captured, smooth reconstructions along the individual dimensions can be obtained using standard interpolation algorithms. Typically, this results in a substantial reduction of resolution (and hence image quality). One can extract significantly greater resolution in each dimension by noting that the light fields associated with real scenes have enormous redundancies within them, causing different dimensions to be highly correlated. Hence, multi-sampled images can be better interpolated using local structural models that are learned offline from a diverse set of training images. The specific type of structural model we use is based on polynomial functions of the measured image intensities; it is very effective as well as computationally efficient. We demonstrate the benefits of structural interpolation using three specific applications: (a) traditional color imaging with a mosaic of color filters, (b) high dynamic range monochrome imaging using a mosaic of exposure filters, and (c) high dynamic range color imaging using a mosaic of overlapping color and exposure filters.
High quality rendering using ray tracing and photon mapping
H. Jensen, Per H. Christensen. ACM SIGGRAPH 2007 courses, August 5, 2007. https://doi.org/10.1145/1281500.1281593

Ray tracing and photon mapping provide a practical way of efficiently simulating global illumination, including interreflections, caustics, color bleeding, participating media and subsurface scattering, in scenes with complicated geometry and advanced material models. This half-day course will provide the insight necessary to efficiently implement and use ray tracing and photon mapping to simulate global illumination in complex scenes. The presentation will cover the fundamentals of ray tracing and photon mapping, including efficient techniques and data structures for managing large numbers of rays and photons. In addition, we will describe how to integrate the information from the photon maps into shading algorithms to render global illumination effects such as caustics, color bleeding, participating media, subsurface scattering, and motion blur. Finally, we will describe recent advances for dealing with highly complex movie scenes as well as recent work on real-time ray tracing and photon mapping.