
ACM SIGGRAPH 2020 Courses: Latest Publications

Seeing around corners using time of flight
Pub Date : 2020-08-17 DOI: 10.1145/3388769.3407534
R. Raskar, A. Velten, S. Bauer, Tristan Swedish
The problem of seeing around corners, often referred to in the broader "Non-Line-of-Sight" context, is to use sensed information from directly visible surfaces of an environment to infer properties of the scene not directly visible. For example, the geometry above presents a classic "around the corner" setting, where a flat wall is used as the visible surface, and the hidden scene is occluded by another wall. While many sensing modalities have been proposed, including acoustic and RF signals, most approaches utilize photonic sensors in the visible spectrum due to the availability of hardware and better temporal and spatial resolution. Approaches range from active time-resolved measurements and time-averaged continuous-wave sources to the passive exploitation of ambient illumination.
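The abstract describes the sensing approaches only at a high level. As a rough illustration of the active time-resolved case, the sketch below backprojects synthetic confocal transient measurements into a hidden-scene voxel grid; the scan geometry, the 16 ps bin width, and every array name here are our own assumptions, not material from the course.

```python
# Illustrative sketch (not from the course notes): confocal NLOS reconstruction by
# naive backprojection. Assumes transients "measurements[s, t]" recorded at wall
# points wall_pts[s] with a co-located source and detector.
import numpy as np

C = 3e8            # speed of light (m/s)
BIN = 16e-12 * C   # distance travelled per assumed 16 ps time bin (m)

def backproject(measurements, wall_pts, voxels):
    """Accumulate each transient sample into every hidden-scene voxel whose
    round-trip distance to the wall point matches the sample's time bin."""
    volume = np.zeros(len(voxels))
    for s, p in enumerate(wall_pts):                 # scanned point on the visible wall
        d = np.linalg.norm(voxels - p, axis=1)       # one-way distance wall point -> voxel
        t_bin = np.floor(2.0 * d / BIN).astype(int)  # confocal: light travels out and back
        valid = t_bin < measurements.shape[1]
        volume[valid] += measurements[s, t_bin[valid]]
    return volume                                    # bright voxels suggest hidden surfaces

# Toy setup: a 16x16 scan grid on the wall (z = 0), 512 time bins, a coarse 32^3 voxel
# grid roughly one meter behind the occluder, and random stand-in data.
xy = np.stack(np.meshgrid(np.linspace(-0.5, 0.5, 16), np.linspace(-0.5, 0.5, 16)), -1)
wall_pts = np.concatenate([xy.reshape(-1, 2), np.zeros((256, 1))], axis=1)
g = np.linspace(-0.5, 0.5, 32)
voxels = np.stack(np.meshgrid(g, g, g + 1.0), -1).reshape(-1, 3)
transients = np.random.rand(len(wall_pts), 512)
volume = backproject(transients, wall_pts, voxels)
```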
Citations: 2
Intelligent tools for creative graphics
Pub Date : 2020-08-17 DOI: 10.1145/3388769.3407498
Ariel Shamir, N. Mitra, Nobuyuki Umetani, Yuki Koyama
In recent years, much research has been dedicated to the development of "intelligent tools" that can assist both professionals and novices in the process of creation. Using the computational power of the machine and advanced techniques, these tools handle complex and tedious tasks that would be difficult or even impossible for humans, thereby freeing the human creator from many constraints and allowing her to concentrate on the creative process, while ensuring high-quality and valid design. This course is aimed at presenting some of the key technologies used to assist interactive creative processes. The course allows researchers and practitioners to understand these techniques more deeply, and may inspire them to research this subject and create intelligent tools themselves. More specifically, the course will concentrate on four main enabling technologies: geometric reasoning, physical constraints, data-driven techniques and machine learning, and crowdsourcing. In each of these areas the course will survey several recent papers and works and provide examples of using them in the creation of a variety of outputs: 3D models, animations, images, videos and more.
Citations: 2
Physics-based differentiable rendering: from theory to implementation
Pub Date : 2020-08-17 DOI: 10.1145/3388769.3407454
Shuang Zhao, Wenzel Jakob, Tzu-Mao Li
Physics-based rendering algorithms generate photorealistic images by simulating the flow of light through a detailed mathematical representation of a virtual scene. In contrast, physics-based differentiable rendering algorithms focus on computing derivatives of images exhibiting complex light transport effects (e.g., soft shadows, interreflection, and caustics) with respect to arbitrary scene parameters such as camera pose, object geometry (e.g., vertex positions), and spatially varying material properties expressed as 2D textures and 3D volumes. This new level of generality has made physics-based differentiable rendering a key ingredient for solving many challenging inverse-rendering problems, that is, the search for scene configurations that optimize user-specified objective functions, using gradient-based methods (as illustrated in the figure below). Further, these techniques can be incorporated into probabilistic inference and machine learning pipelines. For instance, differentiable renderers allow "rendering losses" to be computed while capturing complex light transport effects. Additionally, they can be used as generative models that synthesize photorealistic images.
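To make the gradient-based inverse-rendering loop concrete, here is a minimal sketch with a toy differentiable renderer whose only scene parameter is a single diffuse albedo; the forward model, its analytic gradient, and all names are our own illustration, not code from the course.

```python
# Minimal sketch (our illustration): gradient-based inverse rendering with a toy
# differentiable forward model. Real systems differentiate through full light
# transport; here the "scene parameter" is just one diffuse albedo value.
import numpy as np

# Fixed Lambertian-style shading term; the rendered image is albedo * shading.
shading = np.clip(np.linspace(-0.2, 1.0, 64 * 64), 0.0, None).reshape(64, 64)

def render(albedo):
    return albedo * shading

def loss_and_grad(albedo, target):
    residual = render(albedo) - target
    loss = 0.5 * np.sum(residual ** 2)   # user-specified objective ("rendering loss")
    grad = np.sum(residual * shading)    # analytic dL/d(albedo) via the chain rule
    return loss, grad

target = render(0.7)                     # stand-in for a captured reference image
albedo = 0.1                             # initial guess of the unknown scene parameter
for _ in range(200):
    loss, grad = loss_and_grad(albedo, target)
    albedo -= 1e-4 * grad                # gradient step toward the reference
print(f"recovered albedo ~ {albedo:.3f}")  # converges toward the true value 0.7
```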
Citations: 29
A whirlwind introduction to computer graphics
Pub Date : 2020-08-17 DOI: 10.1145/3388769.3407457
M. Bailey
{"title":"A whirlwind introduction to computer graphics","authors":"M. Bailey","doi":"10.1145/3388769.3407457","DOIUrl":"https://doi.org/10.1145/3388769.3407457","url":null,"abstract":"","PeriodicalId":167147,"journal":{"name":"ACM SIGGRAPH 2020 Courses","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117057513","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Introduction to cinematic scientific visualization
Pub Date : 2020-08-17 DOI: 10.1145/3388769.3407502
Kalina Borkiewicz, A. Christensen, R. Wyatt, E. Wright
The Advanced Visualization Lab (AVL) is part of the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign. The AVL is led by Professor Donna Cox, who coined the term "Renaissance Team", with the belief that bringing together specialists of diverse backgrounds creates a team that is greater than the sum of its parts, and the members of the AVL team reflect that in our interdisciplinarity. We specialize in creating high-quality cinematic scientific visualizations of supercomputer simulations for public outreach.
Citations: 2
Deep optics: joint design of optics and image recovery algorithms for domain specific cameras
Pub Date : 2020-08-17 DOI: 10.1145/3388769.3407486
Yifan Peng, A. Veeraraghavan, W. Heidrich, Gordon Wetzstein
Application-domain-specific cameras that combine customized optics with modern image recovery algorithms are of rapidly growing interest, with widespread applications such as ultrathin cameras for the internet of things or drones, as well as computational cameras for microscopy and scientific imaging. Existing approaches to designing imaging optics are either heuristic or use some proxy metric on the point spread function rather than considering the image quality after post-processing. Without a true end-to-end flow of joint optimization, it remains elusive to find an optimal computational camera for a given visual task. Although this joint-design concept has long been the core idea of computational photography, only now, thanks to advances in machine learning, are the computational tools ready to efficiently model a true end-to-end imaging process. We describe the use of diffractive optics to enable lenses that are not only physically compact but also offer large and flexible design degrees of freedom. By building a differentiable ray or wave optics simulation model that maps the true source image to the reconstructed one, one can jointly train an optical encoder and an electronic decoder. The encoder can be parameterized by the PSF of the physical optics, and the decoder by a convolutional neural network. By running over a broad set of images and defining domain-specific loss functions, the parameters of the optics and the image processing algorithms are jointly learned. We describe typical photography applications for extended depth-of-field, large field-of-view, and high-dynamic-range imaging. We also describe the generalization of this joint design to machine vision and scientific imaging scenarios. To this end, we describe an end-to-end learned, optically coded super-resolution SPAD camera, and a hybrid optical-electronic optimization of optics based on a convolutional layer for image classification. Additionally, we explore lensless imaging with optimized phase masks for realizing an ultra-thin camera, high-resolution wavefront sensing, and face detection.
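As a hedged sketch of the end-to-end pipeline described above, the code below pairs a learnable PSF (the optical encoder) with a Wiener-style deconvolution standing in for the CNN decoder; every function name and shape here is an assumption, and a real system would backpropagate a task loss through both stages with an autodiff framework.

```python
# Hedged sketch (our construction, not the authors' code): one forward pass of a
# "deep optics" pipeline. A softmax-parameterized PSF plays the optical encoder;
# Wiener-style deconvolution stands in for the learned electronic decoder.
import numpy as np

def psf_from_logits(logits):
    """Map unconstrained parameters to a non-negative PSF that sums to one."""
    w = np.exp(logits - logits.max())
    return w / w.sum()

def optical_encoder(image, psf):
    """Simulate capture: circular convolution with the PSF via FFT plus sensor noise."""
    blurred = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(psf, s=image.shape)))
    return blurred + np.random.normal(scale=1e-3, size=image.shape)

def decoder(measurement, psf, eps=1e-2):
    """Stand-in electronic decoder: Wiener deconvolution (a CNN in the real pipeline)."""
    H = np.fft.fft2(psf, s=measurement.shape)
    return np.real(np.fft.ifft2(np.fft.fft2(measurement) * np.conj(H) / (np.abs(H) ** 2 + eps)))

# One end-to-end pass; joint learning would update both the PSF logits and the
# decoder weights by minimizing a domain-specific loss over many training images.
scene = np.random.rand(64, 64)
psf = psf_from_logits(np.random.randn(9, 9))
recovered = decoder(optical_encoder(scene, psf), psf)
loss = np.mean((recovered - scene) ** 2)
```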
Citations: 1
Color basics for digital media and visualization
Pub Date : 2020-08-17 DOI: 10.1145/3388769.3407478
T. Rhyne
{"title":"Color basics for digital media and visualization","authors":"T. Rhyne","doi":"10.1145/3388769.3407478","DOIUrl":"https://doi.org/10.1145/3388769.3407478","url":null,"abstract":"","PeriodicalId":167147,"journal":{"name":"ACM SIGGRAPH 2020 Courses","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130901017","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
The Taichi programming language
Pub Date : 2020-08-17 DOI: 10.1145/3388769.3407493
Yuanming Hu
{"title":"The Taichi programming language","authors":"Yuanming Hu","doi":"10.1145/3388769.3407493","DOIUrl":"https://doi.org/10.1145/3388769.3407493","url":null,"abstract":"","PeriodicalId":167147,"journal":{"name":"ACM SIGGRAPH 2020 Courses","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127409769","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
Moving mobile graphics
Pub Date : 2020-08-17 DOI: 10.1145/3388769.3407515
J. Barker, Sam Martin, R. Guy, Jose-Emilio Munoz-Lopez, Arseny Kapoulkine, Kay Chang
{"title":"Moving mobile graphics","authors":"J. Barker, Sam Martin, R. Guy, Jose-Emilio Munoz-Lopez, Arseny Kapoulkine, Kay Chang","doi":"10.1145/3388769.3407515","DOIUrl":"https://doi.org/10.1145/3388769.3407515","url":null,"abstract":"","PeriodicalId":167147,"journal":{"name":"ACM SIGGRAPH 2020 Courses","volume":"71 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133603952","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Advances in Monte Carlo rendering: the legacy of Jaroslav Křivánek
Pub Date : 2020-08-17 DOI: 10.1145/3388769.3407458
A. Keller, Pascal Grittmann, J. Vorba, Iliyan Georgiev, M. Sik, Eugene d'Eon, Pascal Gautron, Petr Vévoda, Ivo Kondapaneni
Jaroslav Křivánek's research aimed at finding the one robust and efficient light transport simulation algorithm that would handle any given scene with any complexity of transport. He had a clear and unique vision of how to reach this ambitious goal. Along the way, he created an impressive track record of significant research contributions. In this course, his collaborators will tell the story of Jaroslav's quest for that "one" algorithm and discuss his impact and legacy.
Citations: 5