{"title":"RTX accelerated ray tracing with OptiX 7","authors":"I. Wald, S. Parker","doi":"10.1145/3388769.3407532","DOIUrl":"https://doi.org/10.1145/3388769.3407532","url":null,"abstract":"","PeriodicalId":167147,"journal":{"name":"ACM SIGGRAPH 2020 Courses","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133163696","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
What we talk about, when we talk about story. C. Caldwell. ACM SIGGRAPH 2020 Courses, August 17, 2020. https://doi.org/10.1145/3388769.3407548

Story (content) is not just the domain of directors and producers... anymore. Today, it is just as important to the technical directors, animators, VFX creators, and interactive designers whose work is essential in making "the story" come to life, and a grounding in story is particularly useful when communicating with screenwriters, directors, and producers. This course answers the question "What is story?" (and you don't even have to take a course in screenwriting). Knowing the basics of story enables the crew to become collaborators with the producer and director: a director may talk about their story goals, and the crew will know which specific story benchmarks they are trying to meet. The course shows how a story is more than a sequence of events (acts); it can become a dramatic story that builds from setup through resolution. An understanding of story structure allows one to place a story's elements (e.g., theme, character, setting, conflict) in context and relate them to classic story structure (e.g., setup, inciting incident, rising action, climax, resolution). This information is for everyone whose work makes the story better but whose job is not creating the story. These course notes are adapted from Story Structure and Development: A Guide for Animators, VFX Artists, Game Designers, and Virtual Reality Creators (CRC Press, a division of Taylor & Francis), available on Amazon.
Computational time-resolved imaging, single-photon sensing, and non-line-of-sight imaging. David B. Lindell, Matthew O'Toole, S. Narasimhan, R. Raskar. ACM SIGGRAPH 2020 Courses, August 17, 2020. https://doi.org/10.1145/3388769.3407481

Emerging detector technologies are capable of ultrafast capture of single photons, enabling imaging at the speed of light. Not only can these detectors be used for imaging at essentially trillion-frame-per-second rates, but combining them with computational algorithms has given rise to unprecedented new imaging capabilities. Computational time-resolved imaging has enabled new techniques for 3D imaging, light transport analysis, imaging around corners or behind occluders, and imaging through scattering media such as fog, murky water, or human tissue. With applications in autonomous navigation, robotic vision, human-computer interaction, and more, this is an area of rapidly growing interest. In this course, we provide an introduction to computational time-resolved imaging and single-photon sensing with a focus on hardware, applications, and algorithms. We describe various types of emerging single-photon detectors, including single-photon avalanche diodes and avalanche photodiodes, which are among the most popular time-resolved detectors. Physically accurate models for these detectors are described, including the modeling parameters and noise statistics used in most computational algorithms. On the application side, we discuss the use of ultrafast active illumination for 3D imaging and transient imaging, and we describe the state of the art in non-line-of-sight imaging, which requires modeling and inverting the propagation and scattering of light from a visible surface to a hidden object and back. We describe the time-resolved computational algorithms used in each of these applications and offer insights on potential future directions.
Dynamic deformables: implementation and production practicalities. Theodore Kim, D. Eberle. ACM SIGGRAPH 2020 Courses, August 17, 2020. https://doi.org/10.1145/3388769.3407490

Simulating dynamic deformation has been an integral component of Pixar's storytelling since Boo's shirt in Monsters, Inc. (2001). Recently, several key transformations have been applied to Pixar's core simulator, Fizt, that improve its speed, robustness, and generality. Starting with Coco (2017), improved collision detection and response were incorporated into the cloth solver; with Cars 3 (2017), 3D solids were introduced; and in Onward (2020), clothing was allowed to interact with a character's body through two-way coupling. The 3D solids are based on a fast, compact, and powerful new formulation that we have published over the last few years at SIGGRAPH. Under this formulation, the construction and eigendecomposition of the force gradient, long considered the most onerous part of the implementation, become fast and simple. We provide a detailed, self-contained, and unified treatment here that is not available in the technical papers. This new formulation is only a starting point for creating a simulator that is up to the challenges of a production environment. One challenge is performance: we discuss our current best practices for accelerating system assembly and solver performance. Another challenge that requires considerable attention is robust collision detection and response. Much has been written about collision detection approaches such as proximity queries, continuous collision detection, and global intersection analysis. We discuss our strategies for using these techniques, which provide us with the valuable information needed to handle challenging scenarios.
Understanding AR inside and out --- Part Two: expanding out and into the world. Jonathan Ventura, S. Zollmann, S. Stannus, M. Billinghurst, Remi Driancourt. ACM SIGGRAPH 2020 Courses, August 17, 2020. https://doi.org/10.1145/3388769.3407543
Automatic 3D reconstruction of structured indoor environments. G. Pintore, Claudio Mura, F. Ganovelli, Lizeth Joseline Fuentes Perez, R. Pajarola, Enrico Gobbetti. ACM SIGGRAPH 2020 Courses, August 17, 2020. https://doi.org/10.1145/3388769.3407469

Creating high-level structured 3D models of real-world indoor scenes from captured data is a fundamental task with important applications in many fields. Given the complexity and variability of interior environments and the need to cope with noisy and partial captured data, many open research problems remain, despite the substantial progress made in the past decade. In this tutorial, we provide an up-to-date integrative view of the field, bridging complementary views coming from computer graphics and computer vision. After providing a characterization of input sources, we define the structure of output models and the priors exploited to bridge the gap between imperfect sources and desired output. We then identify and discuss the main components of a structured reconstruction pipeline, and review how they are combined in scalable solutions working at the building level. We finally point out relevant research issues and analyze research trends.
Physically based shading in theory and practice. S. Hill, S. McAuley, Laurent Belcour, Will Earl, Niklas Harrysson, Sébastien Hillaire, Naty Hoffman, Lee Kerley, Jasmin Patry, Rob Pieké, Igor Skliar, J. Stone, Pascal Barla, Mégane Bati, Iliyan Georgiev. ACM SIGGRAPH 2020 Courses, August 17, 2020. https://doi.org/10.1145/3388769.3407523

Some Thoughts on the Fresnel Term (Naty Hoffman). The Fresnel term would appear to be the best-understood part of the microfacet model: one can simply use the original equations, or an approximation (usually Schlick's) if computation cost is at a premium. However, in this talk, we will show that all is not as it seems, and that even the humble Fresnel term can hold some surprises. This talk builds on a previous presentation at the 2019 Eurographics Workshop on Material Appearance Modeling [Hof19], extending it into a more comprehensive overview.
{"title":"Troubleshooting and cleanup techniques for 3D printing","authors":"Lance Winkel","doi":"10.1145/3388769.3407538","DOIUrl":"https://doi.org/10.1145/3388769.3407538","url":null,"abstract":"","PeriodicalId":167147,"journal":{"name":"ACM SIGGRAPH 2020 Courses","volume":"265 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122983391","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fundamentals of color science","authors":"J. Ferwerda, Chester F. Carlson","doi":"10.1145/3388769.3407479","DOIUrl":"https://doi.org/10.1145/3388769.3407479","url":null,"abstract":"It is important to recognize that many of the words, images, sounds, objects, and technologies presented at SIGGRAPH are protected by copyrights or patents. They are owned by the people who created them. Please respect their intellectual-property rights by refraining from making recordings from your device or taking screenshots. If you are interested in the content, feel free to reach out to the contributor or visit the ACM SIGGRAPH Digital library after the event, where the proceedings will be made available.","PeriodicalId":167147,"journal":{"name":"ACM SIGGRAPH 2020 Courses","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122289348","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}