Seeing around corners using time of flight
R. Raskar, A. Velten, S. Bauer, Tristan Swedish
ACM SIGGRAPH 2020 Courses. https://doi.org/10.1145/3388769.3407534

The problem of seeing around corners, often studied in the broader "non-line-of-sight" (NLOS) context, is to use sensed information from the directly visible surfaces of an environment to infer properties of the scene that are not directly visible. A classic "around the corner" setting uses a flat wall as the visible surface, with the hidden scene occluded by another wall. While many sensing modalities have been proposed, including acoustic and RF signals, most approaches use photonic sensors in the visible spectrum because of the availability of hardware and their better temporal and spatial resolution. Approaches range from active time-resolved measurements to time-averaged continuous-wave sources, and even passive exploitation of ambient illumination.
{"title":"Seeing around corners using time of flight","authors":"R. Raskar, A. Velten, S. Bauer, Tristan Swedish","doi":"10.1145/3388769.3407534","DOIUrl":"https://doi.org/10.1145/3388769.3407534","url":null,"abstract":"The problem of seeing around corners, often referred in the broader \"Non-Line-of-Sight\" context, is to use sensed information from directly visible surfaces of an environment to infer properties of the scene not directly visible. For example, the geometry above presents a classic \"around the corner\" setting, where a flat wall is used as the visible surface, and the hidden scene is occluded by another wall. While many proposed sensing modalities have been proposed, including acoustic and RF signals, most approaches utilize photonic sensors in the visible spectrum due to the availability of hardware, and better temporal and spatial resolution. Approaches range from active time-resolved measurements, time-averaged continuous wave sources, and even to passive exploitation of ambient illumination.","PeriodicalId":167147,"journal":{"name":"ACM SIGGRAPH 2020 Courses","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122123701","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Intelligent tools for creative graphics
Ariel Shamir, N. Mitra, Nobuyuki Umetani, Yuki Koyama
ACM SIGGRAPH 2020 Courses. https://doi.org/10.1145/3388769.3407498
In recent years, much research has been dedicated to the development of "intelligent tools" that can assist both professionals and novices in the process of creation. Using the computational power of the machine and advanced algorithmic techniques, these tools handle complex and tedious tasks that were difficult or even impossible for humans, freeing the creator from many constraints and allowing them to concentrate on the creative process while ensuring high-quality, valid designs. This course presents some of the key technologies used to assist interactive creative processes, allowing researchers and practitioners to understand these techniques more deeply and possibly inspiring them to research this subject and create intelligent tools themselves. Specifically, the course concentrates on four main enabling technologies: geometric reasoning, physical constraints, data-driven techniques and machine learning, and crowdsourcing. In each of these areas, the course surveys several recent papers and works and provides examples of their use in creating a variety of outputs: 3D models, animations, images, videos, and more.
{"title":"Intelligent tools for creative graphics","authors":"Ariel Shamir, N. Mitra, Nobuyuki Umetani, Yuki Koyama","doi":"10.1145/3388769.3407498","DOIUrl":"https://doi.org/10.1145/3388769.3407498","url":null,"abstract":"In recent years, much research has been dedicated to the development of \"intelligent tools\" that can assist both professionals as well as novices in the process of creation. Using the computational power of the machine, and involving advanced techniques, the tools handle complex and tedious tasks that were difficult or even impossible for humans, thereby freeing the human creator of many constraints and allowing her to concentrate on the creative process, while ensuring high-quality and valid design. This course is aimed at presenting some of the key technologies used to assist interactive creative processes. The course allows researchers and practitioners to understand these techniques more deeply, and possibly inspire them to research this subject and create intelligent tools themselves. More specifically, the course will concentrate on four main enabling technologies: geometric reasoning, physical constraints, data-driven techniques and machine learning, and crowdsourcing. In each of these areas the course will survey several recent papers and works and provide examples of using these in the creation of a variety of outputs: 3D models, animations, images, videos and more.","PeriodicalId":167147,"journal":{"name":"ACM SIGGRAPH 2020 Courses","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128468969","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Physics-based differentiable rendering: from theory to implementation
Shuang Zhao, Wenzel Jakob, Tzu-Mao Li
ACM SIGGRAPH 2020 Courses. https://doi.org/10.1145/3388769.3407454

Physics-based rendering algorithms generate photorealistic images by simulating the flow of light through a detailed mathematical representation of a virtual scene. In contrast, physics-based differentiable rendering algorithms focus on computing derivatives of images exhibiting complex light-transport effects (e.g., soft shadows, interreflection, and caustics) with respect to arbitrary scene parameters such as camera pose, object geometry (e.g., vertex positions), and spatially varying material properties expressed as 2D textures and 3D volumes. This level of generality has made physics-based differentiable rendering a key ingredient for solving many challenging inverse-rendering problems, that is, the search for scene configurations that optimize user-specified objective functions using gradient-based methods. Further, these techniques can be incorporated into probabilistic inference and machine learning pipelines. For instance, differentiable renderers allow "rendering losses" to be computed in a way that captures complex light-transport effects, and they can be used as generative models that synthesize photorealistic images.
{"title":"Physics-based differentiable rendering: from theory to implementation","authors":"Shuang Zhao, Wenzel Jakob, Tzu-Mao Li","doi":"10.1145/3388769.3407454","DOIUrl":"https://doi.org/10.1145/3388769.3407454","url":null,"abstract":"Physics-based rendering algorithms generate photorealistic images by simulating the flow of light through a detailed mathematical representation of a virtual scene. In contrast, physics-based differentiable rendering algorithms focus on computing derivative of images exhibiting complex light transport effects (e.g., soft shadows, interreflection, and caustics) with respect to arbitrary scene parameters such as camera pose, object geometry (e.g., vertex positions) as well as spatially varying material properties expressed as 2D textures and 3D volumes. This new level of generality has made physics-based differentiable rendering a key ingredient for solving many challenging inverse-rendering problems, that is, the search of scene configurations optimizing user-specified objective functions, using gradient-based methods (as illustrated in the figure below). Further, these techniques can be incorporated into probabilistic inference and machine learning pipelines. For instance, differentiable renderers allow \"rendering losses\" to be computed with complex light transport effects captured. Additionally, they can be used as generative models that synthesize photorealistic images.","PeriodicalId":167147,"journal":{"name":"ACM SIGGRAPH 2020 Courses","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128018817","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Introduction to cinematic scientific visualization
Kalina Borkiewicz, A. Christensen, R. Wyatt, E. Wright
ACM SIGGRAPH 2020 Courses. https://doi.org/10.1145/3388769.3407502
The Advanced Visualization Lab (AVL) is part of the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign. The AVL is led by Professor Donna Cox, who coined the term "Renaissance Team" out of the belief that bringing together specialists of diverse backgrounds creates a team greater than the sum of its parts, and the members of the AVL reflect that interdisciplinarity. We specialize in creating high-quality cinematic scientific visualizations of supercomputer simulations for public outreach.
{"title":"Introduction to cinematic scientific visualization","authors":"Kalina Borkiewicz, A. Christensen, R. Wyatt, E. Wright","doi":"10.1145/3388769.3407502","DOIUrl":"https://doi.org/10.1145/3388769.3407502","url":null,"abstract":"The Advanced Visualization Lab (AVL) is part of the the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign. The AVL is led by Professor Donna Cox, who coined the term \"Renaissance Team\", with the belief that bringing together specialists of diverse backgrounds creates a team that is greater than the sum of its parts, and members of the AVL team reflect that in our interdisciplinarity. We specialize in creating high-quality cinematic scientific visualizations of supercomputer simulations for public outreach.","PeriodicalId":167147,"journal":{"name":"ACM SIGGRAPH 2020 Courses","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114403924","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Deep optics: joint design of optics and image recovery algorithms for domain specific cameras
Yifan Peng, A. Veeraraghavan, W. Heidrich, Gordon Wetzstein
ACM SIGGRAPH 2020 Courses. https://doi.org/10.1145/3388769.3407486
Application-domain-specific cameras that combine customized optics with modern image recovery algorithms are of rapidly growing interest, with widespread applications such as ultrathin cameras for internet-of-things devices and drones, as well as computational cameras for microscopy and scientific imaging. Existing approaches to designing imaging optics are either heuristic or use a proxy metric on the point spread function (PSF) rather than considering the image quality after post-processing. Without a true end-to-end flow of joint optimization, it remains elusive to find an optimal computational camera for a given visual task. Although this joint-design concept has long been a core idea of computational photography, only now have advances in machine learning made the computational tools ready to efficiently optimize a true end-to-end imaging process. We describe the use of diffractive optics to enable lenses that are not only physically compact but also offer large and flexible design degrees of freedom. By building a differentiable ray- or wave-optics simulation model that maps the true source image to the reconstructed one, one can jointly train an optical encoder and an electronic decoder, where the encoder is parameterized by the PSF of the physical optics and the decoder is a convolutional neural network. By training over a broad set of images with domain-specific loss functions, the parameters of the optics and of the image processing algorithm are jointly learned. We describe typical photography applications for extended depth-of-field, large field-of-view, and high-dynamic-range imaging, and we generalize this joint design to machine vision and scientific imaging scenarios. To this end, we describe an end-to-end learned, optically coded super-resolution SPAD camera and a hybrid optical-electronic convolutional-layer-based optimization of optics for image classification. Additionally, we explore lensless imaging with optimized phase masks for realizing an ultrathin camera, high-resolution wavefront sensing, and face detection.
{"title":"Deep optics: joint design of optics and image recovery algorithms for domain specific cameras","authors":"Yifan Peng, A. Veeraraghavan, W. Heidrich, Gordon Wetzstein","doi":"10.1145/3388769.3407486","DOIUrl":"https://doi.org/10.1145/3388769.3407486","url":null,"abstract":"Application-domain-specific cameras that combine customized optics with modern image recovery algorithms are of rapidly growing interest, with widespread applications like ultrathin cameras for internet-of-things or drones, as well as computational cameras for microscopy and scientific imaging. Existing approaches of designing imaging optics are either heuristic or use some proxy metric on the point spread function rather than considering the image quality after post-processing. Without a true end-to-end flow of joint optimization, it remains elusive to find an optimal computational camera for a given visual task. Although this joint design concept has been the core idea of computational photography for a long time, but that only nowadays the computational tools are ready to efficiently interpret a true end-to-end imaging process via machine learning advances. We describe the use of diffractive optics to enable lenses not only showing the compact physical appearance, but also flexible and large design degree of freedom. By building a differentiable ray or wave optics simulation model that maps the true source image to the reconstructed one, one can jointly train an optical encoder and electronic decoder. The encoder can be parameterized by the PSF of physical optics, and the decoder a convolutional neural network. By running over a broad set of images and defining domain-specific loss functions, parameters of the optics and image processing algorithms are jointly learned. We describe typical photography applications for extended depth-of-field, large field-of-view, and high-dynamic-range imaging. We also describe the generalization of this joint-design to machine vision and scientific imaging scenarios. To this point, we describe an end-to-end learned, optically coded super-resolution SPAD camera, and a hybrid optical-electronic convolutional layer based optimization of optics for image classification. Additionally, we explore lensless imaging with optimized phase masks for realizing an ultra-thin camera, a high-resolution wavefront sensing, and face detection.","PeriodicalId":167147,"journal":{"name":"ACM SIGGRAPH 2020 Courses","volume":"192 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116142395","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Color basics for digital media and visualization","authors":"T. Rhyne","doi":"10.1145/3388769.3407478","DOIUrl":"https://doi.org/10.1145/3388769.3407478","url":null,"abstract":"","PeriodicalId":167147,"journal":{"name":"ACM SIGGRAPH 2020 Courses","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130901017","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Moving mobile graphics
J. Barker, Sam Martin, R. Guy, Jose-Emilio Munoz-Lopez, Arseny Kapoulkine, Kay Chang
ACM SIGGRAPH 2020 Courses. https://doi.org/10.1145/3388769.3407515
{"title":"Moving mobile graphics","authors":"J. Barker, Sam Martin, R. Guy, Jose-Emilio Munoz-Lopez, Arseny Kapoulkine, Kay Chang","doi":"10.1145/3388769.3407515","DOIUrl":"https://doi.org/10.1145/3388769.3407515","url":null,"abstract":"","PeriodicalId":167147,"journal":{"name":"ACM SIGGRAPH 2020 Courses","volume":"71 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133603952","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Advances in Monte Carlo rendering: the legacy of Jaroslav Křivánek
A. Keller, Pascal Grittmann, J. Vorba, Iliyan Georgiev, M. Sik, Eugene d'Eon, Pascal Gautron, Petr Vévoda, Ivo Kondapaneni
ACM SIGGRAPH 2020 Courses. https://doi.org/10.1145/3388769.3407458
Jaroslav Křivánek's research aimed at finding the one robust and efficient light-transport simulation algorithm that would handle any given scene, with any complexity of transport. He had a clear and unique vision of how to reach this ambitious goal, and along the way he created an impressive track record of significant research contributions. In this course, his collaborators will tell the story of Jaroslav's quest for that "one" algorithm and discuss his impact and legacy.