Correlation-based reconstruction of a 3D object from a single freehand sketch
Hod Lipson, M. Shpitalni. ACM SIGGRAPH 2007 courses. DOI: 10.1145/1281500.1281555

We propose a new approach for reconstructing a three-dimensional object from a single two-dimensional freehand line drawing depicting it. A sketch is essentially a noisy projection of a 3D object onto an arbitrary 2D plane. Reconstruction is the inverse projection of the sketched geometry from two dimensions back into three dimensions. While humans perform this reverse projection remarkably easily and almost without being aware of it, the process is mathematically indeterminate and very difficult to emulate computationally. Here we propose that the human ability to perceive a previously unseen 3D object from a single sketch is based on simple 2D-3D geometrical correlations that are learned from visual experience. We demonstrate how a simple correlation system that is exposed to many object-sketch pairs eventually learns to perform the inverse projection successfully for unseen objects. Conversely, we show how the same correlation data can be used to gauge the understandability of synthetically generated projections of given 3D objects. Using these principles, we demonstrate for the first time a completely automatic conversion of a single freehand sketch into a physical solid object. These results have implications for bidirectional human-computer communication of 3D graphic concepts, and might also shed light on the human visual system.
Effects 10
S. Glassenberg. ACM SIGGRAPH 2007 courses. DOI: 10.1145/1281500.1281577

Effects 10 (Sam Glassenberg): A review of the Direct3D 10 Effects System -- a series of APIs that efficiently abstract and manage GPU device state, shaders, and constants. This talk covers methods for reflecting and managing material content as effect (.fx) files. [45 minutes]
Part I: fundamentals
Li-Yi Wei. ACM SIGGRAPH 2007 courses. DOI: 10.1145/1281500.1281612

In this part of the course notes we introduce the fundamental concepts and algorithms of texture synthesis. The goal is to enable readers to start implementing texture synthesis algorithms and producing quick results, rather than to provide a comprehensive literature survey, so we concentrate on state-of-the-art algorithms that strike the best balance between quality, speed, and simplicity. Fortunately, algorithms that work well in terms of quality and speed are often simple and elegant, making them easy to understand and implement.
Massive model visualization using realtime ray tracing
Andreas Dietrich, P. Slusallek. ACM SIGGRAPH 2007 courses. DOI: 10.1145/1281500.1281570

In recent years, real-time ray tracing has become an attractive alternative to rasterization-based rendering, particularly for highly complex datasets that include both surface and volume data. Ray tracing [7, 15] is a much more flexible rendering algorithm than the triangle rasterization found in most of today's graphics cards. Employing it in a real-time context might at first sound surprising, as ray tracing is mostly known for its application to high-quality off-line image generation, e.g. in the motion picture industry. Infamous for its long rendering times, ray tracing was not used for interactive purposes until recently [13, 14, 19]. What makes it attractive for massive model rendering is not only its simplicity and robustness, but especially its versatility.
Modern approaches to augmented reality
O. Bimber, R. Raskar. ACM SIGGRAPH 2007 courses. DOI: 10.1145/1281500.1281628

This tutorial discusses the Spatial Augmented Reality (SAR) concept, its advantages, and its limitations. It presents examples of state-of-the-art display configurations, appropriate real-time rendering techniques, details of hardware and software implementations, and current areas of application. Specifically, it describes techniques for optical combination using single or multiple spatially aligned mirror beam splitters, image sources, transparent screens, and optical holograms. Furthermore, it presents techniques for projector-based augmentation of geometrically complex and textured display surfaces and, along with optical combination, methods for achieving consistent illumination and occlusion effects. Finally, it surveys emerging technologies that have the potential to enhance future augmented reality displays.
An interactive introduction to OpenGL programming
D. Shreiner, Edward Angel, Vicki Shreiner. ACM SIGGRAPH 2007 courses. DOI: 10.1145/1281500.1281596

This course provides an introduction to writing interactive computer graphics applications using the OpenGL Application Programming Interface (API). In addition to presenting the calls of the OpenGL library in the context of generating particular graphics effects, such as lighting or texture mapping, the course makes extensive use of tutorial programs that allow students to interactively manipulate the parameters of the function calls and immediately see the effects on the rendered image. The course assumes no previous experience with OpenGL, merely the ability to read simple “C” programs. Topics range from a brief overview of the OpenGL libraries, to the rendering of simple geometric primitives, to geometric transformations, and on to advanced features of OpenGL including lighting, texture mapping, anti-aliasing, and image processing.