Marilene Oliver, Scott R. Smallwood, Stephan Moore, J. Carpenter, Jonathan Cohn
We are constantly being warned that our personal data is vulnerable, that it is being used and abused by artificial intelligence, giant tech corporations and controlling governments. But do we really understand what "our data" consists of and what can be done with and to it? Is it possible to unravel the complex entanglements of data gathering and processing technologies in order to see and understand our data in a meaningful way? My Data Body is a virtual reality (VR) artwork that brings together some of our most personal and sensitive data, such as medical scans, social media, biometric and social security data, in an attempt to make visible and manipulable our many intersecting data corpuses so that they can be held, inspected, dissected and played with as a way to start understanding and answering these questions. My Data Body was created as part of Know Thyself as a Virtual Reality (KTVR), a multi-faceted interdisciplinary project that explores the ethics and aesthetics of the contemporary "data body". KTVR brings together researchers across the arts and sciences to innovate new creative methodologies, educational resources and ethical guidelines for working artistically with personal data.
Marilene Oliver, Scott R. Smallwood, Stephan Moore, J. Carpenter, Jonathan Cohn, "Dissecting My Data Body," Proceedings of the ACM on Computer Graphics and Interactive Techniques 5(1), pp. 1-9, 2022-09-06. DOI: 10.1145/3533387
Scan processing is an analog electronic image manipulation technology that emerged in the late 1960s, reached its apex during the 1970s, and was made obsolete by digital computing in the 1980s. During this period, scan processing instruments such as the Scanimate (1969) and the Rutt/Etra Video Synthesizer (1973) revolutionized commercial animation and inspired a generation of experimental video artists. This paper presents a media archaeological examination of scan processing which analyzes the history and functioning of the instruments used, what sorts of possibilities they afforded their users, and how those affordances were realized with technology of the era. The author proposes the reenactment of historical media technologies as an investigative methodology which helps us understand the relation of past and present, and details a reenactment of scan processing involving the display of digitally synthesized audio signals on an analog cathode ray tube (CRT) vector monitor.
Derek Holzer, "In Search of the Plastic Image," Proceedings of the ACM on Computer Graphics and Interactive Techniques, pp. 1-8, 2022-09-06. DOI: 10.1145/3539218
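The reenactment described above drives an analog XY (vector) display with two synthesized audio channels. As a rough illustration of the idea (not the author's actual code), this sketch generates a stereo sample buffer that would trace a Lissajous figure when the left channel feeds the X input and the right channel feeds the Y input:

```python
import math

def lissajous_stereo(freq_x, freq_y, phase, sample_rate=48000, duration=1.0):
    """Synthesize a stereo buffer whose left/right channels drive the
    X/Y inputs of an analog vector display, tracing a Lissajous figure."""
    n = int(sample_rate * duration)
    left = [math.sin(2 * math.pi * freq_x * t / sample_rate + phase)
            for t in range(n)]
    right = [math.sin(2 * math.pi * freq_y * t / sample_rate)
             for t in range(n)]
    return list(zip(left, right))

# A 3:2 frequency ratio with a 90-degree phase offset traces a
# classic multi-lobed Lissajous curve on the monitor.
samples = lissajous_stereo(300, 200, math.pi / 2)
```

Writing `samples` to a stereo audio interface wired to the monitor's X/Y inputs would reproduce the basic oscillographics setup the reenactment builds on.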
David Gochfeld, Alex Coulombe, Yu-Jun Yeh, Robert C. Lester, R. B. Fleming, Zachary Meicher-Buzzi, Ari Tarr
During the COVID-19 pandemic, many theatre companies began experimenting with new technologies and ways to bring their work to audiences. As theatres have resumed in-person performances, they are exploring how these new techniques can be incorporated into their productions. Recently, one leading regional theatre company produced two versions of A Christmas Carol in parallel: one presented on stage, and the other entirely in virtual reality. They used virtual production tools including real-time motion capture, virtual humans, game engine rendering, and a new platform for multi-user VR experiences. We discuss the process, challenges, and creative decisions behind these shows, with an eye towards informing future theatrical productions.
David Gochfeld, Alex Coulombe, Yu-Jun Yeh, Robert C. Lester, R. B. Fleming, Zachary Meicher-Buzzi, Ari Tarr, "A Tale of Two Productions," Proceedings of the ACM on Computer Graphics and Interactive Techniques, pp. 1-9, 2022-09-06. DOI: 10.1145/3533612
Human action recognition continues to evolve and improve through deep learning techniques. Studies have achieved some success in action recognition, but few have focused on traditional dance, in part because dance actions, especially in traditional African dance, are long and involve fast movements. This research proposes a novel framework that applies data science algorithms to the field of cultural preservation, using various deep learning techniques to identify, classify, and model traditional African dances from videos. Traditional dances are an important part of African culture and heritage, and digitally preserving them in their multitude and form is a challenging problem. The dance dataset was assembled from freely available YouTube videos. Four traditional African dances were used for the classification process: Adowa, Swange, Bata, and Sinte. Five convolutional neural network (CNN) models were used for the classification and achieved accuracies between 93% and 98%. Additionally, human pose estimation algorithms were applied to Sinte dance, yielding a model of Sinte dance that can be exported to other environments.
A. E. Odefunso, E. Bravo, Victor Y. Chen, "Traditional African Dances Preservation Using Deep Learning Techniques," Proceedings of the ACM on Computer Graphics and Interactive Techniques, pp. 1-11, 2022-09-06. DOI: 10.1145/3533608
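The paper's CNN pipeline is not reproduced here, but a frame-based classifier for long dance videos needs some way to turn per-frame predictions into a single video-level label. One common scheme (an assumption for illustration, not necessarily the authors' aggregation step) is majority voting over the argmax class of each frame:

```python
from collections import Counter

DANCES = ["Adowa", "Swange", "Bata", "Sinte"]

def video_label(frame_probs):
    """Aggregate per-frame class probabilities into one video-level label:
    take each frame's argmax class, then majority-vote across frames."""
    votes = [max(range(len(p)), key=p.__getitem__) for p in frame_probs]
    winner, _ = Counter(votes).most_common(1)[0]
    return DANCES[winner]

# Three frames of hypothetical softmax outputs; two frames vote for Sinte.
probs = [
    [0.1, 0.2, 0.1, 0.6],
    [0.0, 0.7, 0.1, 0.2],
    [0.2, 0.1, 0.2, 0.5],
]
label = video_label(probs)  # "Sinte"
```

Voting smooths over the fast, transient movements that make individual dance frames hard to classify in isolation.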
This paper describes the public art installation In Love With The World by artist Anicka Yi, which embodies a complex virtual ecosystem of autonomous agents in a physical space. The agents, called Aerobes, are inspired by the lifecycle of the Aurelia sp. jellyfish, and use artificial life techniques designed by the authors to simulate the behavior of two distinct phenotypes. These agents are embodied in the Tate Modern's Turbine Hall as helium-filled, lighter-than-air soft robots that can respond to museum visitors through sensors embedded in the space. By creating organic-looking, fully autonomous agents that are capable of real-time interaction, we hope to create an experience that causes viewers to question what living with machines might feel like in a speculative far-future, and to imagine an alternative form of artificial intelligence that is neither threatening to humanity nor subservient to it, but exists on an altogether parallel track as a new form of life.
Nathan S. Lachenmyer, Sadiya Akasha, "An Aquarium of Machines," Proceedings of the ACM on Computer Graphics and Interactive Techniques 5(1), pp. 1-11, 2022-09-06. DOI: 10.1145/3533388
Ticha Sethapakdi, Mackenzie Leake, Catalina Monsalve Rodriguez, Miranda J. Cai, Stefanie Mueller
The kinegram is a classic animation technique that involves sliding a striped overlay across an interlaced image to create the effect of frame-by-frame motion. While there are known tools for generating kinegrams from preexisting videos and images, there exists no system for capturing and fabricating kinegrams in situ. To bridge this gap, we created KineCAM, an open-source instant camera that captures and prints animated photographs in the form of kinegrams. We present our experience using KineCAM to create a portrait series, and discuss how this type of customizable instant camera platform can create new opportunities for experimental and social photography.
Ticha Sethapakdi, Mackenzie Leake, Catalina Monsalve Rodriguez, Miranda J. Cai, Stefanie Mueller, "KineCAM," Proceedings of the ACM on Computer Graphics and Interactive Techniques, pp. 1-9, 2022-09-06. DOI: 10.1145/3533613
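KineCAM's capture-and-print pipeline is not detailed in the abstract, but the core kinegram interlacing step is simple to sketch. Assuming frames are represented as 2-D arrays of pixel values (a simplification of the real image pipeline), output column c is copied from frame c mod N, so a striped overlay with one transparent slit per N columns reveals each frame in turn:

```python
def interlace(frames):
    """Interlace N equal-sized frames column-wise: output column c is
    copied from frame (c mod N). Sliding a striped overlay with one
    transparent slit per N columns then reveals each frame in turn."""
    n = len(frames)
    height, width = len(frames[0]), len(frames[0][0])
    return [[frames[c % n][r][c] for c in range(width)]
            for r in range(height)]

# Two 2x4 "frames" of pixel values; columns alternate between them.
a = [[1, 1, 1, 1], [1, 1, 1, 1]]
b = [[2, 2, 2, 2], [2, 2, 2, 2]]
img = interlace([a, b])  # [[1, 2, 1, 2], [1, 2, 1, 2]]
```

With real photographs the same indexing is applied per pixel row, and the stripe width is matched to the printer's resolution so the overlay aligns with the interlaced columns.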
We present a computationally efficient and numerically robust algorithm for finding real roots of polynomials. It begins by determining the intervals where the given polynomial is monotonic. Then, it performs a robust variant of Newton iterations to find the real root within each interval, providing fast and guaranteed convergence and satisfying the given error bound, as permitted by the numerical precision used. For cubic polynomials, the algorithm is more accurate and faster than both the analytical solution and directly applying Newton iterations. It trivially extends to polynomials of arbitrary degree, but it is limited to finding the real roots only and has quadratic worst-case complexity in terms of the polynomial's degree. We show that our method outperforms alternative polynomial solutions we tested up to degree 20. We also present an example rendering application with a known efficient numerical solution and show that our method provides faster, more accurate, and more robust solutions by solving polynomials of degree 10.
Cem Yuksel, "High-Performance Polynomial Root Finding for Graphics," Proceedings of the ACM on Computer Graphics and Interactive Techniques 5(1), pp. 1-15, 2022-07-25. DOI: 10.1145/3543865
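As a rough sketch of the outline above (monotonic intervals first, then root refinement), the following solves a cubic by splitting the domain at the roots of the derivative; each resulting interval is monotonic and therefore contains at most one root. Plain bisection is used within each interval for brevity where the paper uses safeguarded Newton iterations:

```python
import math

def cubic_roots(a, b, c, d, lo=-1e3, hi=1e3, tol=1e-12):
    """Real roots of a*x^3 + b*x^2 + c*x + d in [lo, hi] (a != 0).
    Split the domain at the derivative's roots, where the cubic is
    monotonic, then refine at most one root per interval by bisection."""
    f = lambda x: ((a * x + b) * x + c) * x + d
    # Critical points: roots of the derivative 3a*x^2 + 2b*x + c.
    crits = []
    disc = b * b - 3.0 * a * c
    if disc > 0.0:
        s = math.sqrt(disc)
        crits = sorted([(-b - s) / (3.0 * a), (-b + s) / (3.0 * a)])
    ends = [lo] + [x for x in crits if lo < x < hi] + [hi]
    roots = []
    for x0, x1 in zip(ends, ends[1:]):
        if f(x0) * f(x1) > 0.0:
            continue  # no sign change: no root in this monotonic span
        while x1 - x0 > tol:
            mid = 0.5 * (x0 + x1)
            if f(x0) * f(mid) <= 0.0:
                x1 = mid
            else:
                x0 = mid
        roots.append(0.5 * (x0 + x1))
    return roots

roots = cubic_roots(1.0, -6.0, 11.0, -6.0)  # (x-1)(x-2)(x-3): roots near 1, 2, 3
```

The monotonic split is what guarantees convergence: inside each interval there is at most one sign change, so any bracketing method (or a Newton iteration safeguarded against leaving the bracket, as in the paper) cannot miss or overshoot a root.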
M. M. Thomas, Gabor Liktor, Christoph Peters, Sung-ye Kim, K. Vaidyanathan, A. Forbes
Recent advances in ray tracing hardware bring real-time path tracing into reach, and ray-traced soft shadows, glossy reflections, and diffuse global illumination are now common features in games. Nonetheless, ray budgets are still limited. This results in undersampling, which manifests as aliasing and noise. Prior work addresses these issues separately. While temporal supersampling methods based on neural networks have gained wide use in modern games due to their better robustness, neural denoising remains challenging because of its higher computational cost. We introduce a novel neural network architecture for real-time rendering that combines supersampling and denoising, thus lowering the cost compared to two separate networks. This is achieved by sharing a single low-precision feature extractor with multiple higher-precision filter stages. To reduce cost further, our network takes low-resolution inputs and reconstructs a high-resolution denoised supersampled output. Our technique produces temporally stable high-fidelity results that significantly outperform state-of-the-art real-time statistical or analytical denoisers combined with TAA or neural upsampling to the target resolution.
M. M. Thomas, Gabor Liktor, Christoph Peters, Sung-ye Kim, K. Vaidyanathan, A. Forbes, "Temporally Stable Real-Time Joint Neural Denoising and Supersampling," Proceedings of the ACM on Computer Graphics and Interactive Techniques 5(1), pp. 1-22, 2022-07-25. DOI: 10.1145/3543870
We present a new ray tracing primitive---a curved ribbon, which is embedded inside a ruled surface. We describe two such surfaces. Ribbons inside doubly ruled bilinear patches can be intersected by solving a quadratic equation. We also consider a singly ruled surface whose directrix is a quadratic Bézier curve and whose generator is given by two linearly interpolated bitangent vectors. Intersecting such a surface requires solving a cubic equation, but it provides more fine-tuned control of the ribbon shape. These two primitives are smooth, composable, and allow fast non-iterative intersections. These are the first primitives that possess all such properties simultaneously.
A. Reshetov, "Ray/Ribbon Intersections," Proceedings of the ACM on Computer Graphics and Interactive Techniques, pp. 1-22, 2022-07-25. DOI: 10.1145/3543862
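The quadratic solve that the bilinear-patch case reduces to is numerically delicate: the textbook formula loses precision when b² is much larger than 4ac. A standard stable formulation, shown here as a general building block rather than the paper's code, computes the larger-magnitude root first and derives the other from the product of roots:

```python
import math

def solve_quadratic(a, b, c):
    """Real roots of a*t^2 + b*t + c = 0, avoiding the catastrophic
    cancellation of the textbook formula: compute the larger-magnitude
    root from q = -(b + sign(b)*sqrt(disc))/2, then the other as c/q."""
    if a == 0.0:
        return [] if b == 0.0 else [-c / b]
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return []
    q = -0.5 * (b + math.copysign(math.sqrt(disc), b))
    if q == 0.0:  # b == 0 and disc == 0: double root at t = 0
        return [0.0]
    return sorted({q / a, c / q})

roots = solve_quadratic(1.0, -5.0, 6.0)  # [2.0, 3.0]
```

For intersection tests, each real root would still be checked against the valid parameter range of the ribbon before being accepted as a hit.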
We propose a new approach to rendering production-style content with full path tracing in a data-distributed fashion---that is, with multiple collaborating nodes and/or GPUs that each store only part of the model. In particular, we propose a new approach to ray-forwarding based data-parallel ray tracing that improves over traditional spatial partitioning, that can support both object-hierarchy and spatial partitioning (or any combination thereof), and that employs multiple techniques for reducing the number of rays sent across the network. We show that this approach can simultaneously achieve higher flexibility in model partitioning, lower memory per node, lower bandwidth during rendering, and higher performance; and that it can ultimately achieve interactive rendering performance for non-trivial models with full path tracing even on quite moderate hardware resources with relatively low-end interconnect.
I. Wald, S. Parker, "Data Parallel Path Tracing with Object Hierarchies," Proceedings of the ACM on Computer Graphics and Interactive Techniques, pp. 1-16, 2022-07-25. DOI: 10.1145/3543861
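As a toy illustration of ray forwarding (not the paper's system), the sketch below models nodes that each own part of a 1-D "scene" and a ray that visits them in turn, carrying the closest hit found so far. Real systems prune forwards using each node's domain bounds and overlap communication with computation; here every node is visited for clarity:

```python
def trace_on_node(objects, origin, best_t):
    """Trace a +x ray against this node's local geometry (1-D intervals
    standing in for objects) and return the improved nearest-hit distance."""
    for lo, _ in objects:
        if lo >= origin:          # surface ahead of the ray origin
            best_t = min(best_t, lo - origin)
    return best_t

def forward_ray(nodes, origin):
    """Forward one ray through every node, carrying the closest hit so far.
    The loop stands in for sending the ray (plus its best_t) over the network."""
    best_t = float("inf")
    for objects in nodes:
        best_t = trace_on_node(objects, origin, best_t)
    return best_t

# Three nodes, each owning part of the scene.
nodes = [[(5.0, 6.0)], [(2.0, 3.0)], [(9.0, 11.0)]]
hit = forward_ray(nodes, 0.0)  # nearest surface ahead of x=0 is at x=2
```

Carrying `best_t` with the ray is what lets later nodes skip geometry that is already occluded, which is one of the levers the paper uses to cut the number of rays sent across the network.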