In animation production, animators spend significant time and effort developing quality deformation systems for characters with complex appearances and details. To reduce the time spent on repetitive skinning and fine-tuning work, we propose an end-to-end approach that automatically computes deformations for new characters based on the graph information of existing high-quality skinned character meshes. We adopt the idea of regarding mesh deformations as a combination of linear and nonlinear parts and propose a novel architecture for approximating the complex nonlinear deformations; the linear deformations, on the other hand, are simple and can be computed directly, although not precisely. To enable our network to handle complicated graph data and inductively predict nonlinear deformations, we design a graph-attention-based (GAT) block consisting of an aggregation stream and a self-reinforced stream, which respectively aggregate the features of neighboring nodes and strengthen the features of a single graph node. To reduce the difficulty of learning the huge number of mesh features, we introduce a dense connection pattern between a set of GAT blocks, called a "dense module," to ensure feature propagation through our deep framework. These strategies allow the deformation features of existing well-skinned character models to be shared with new ones; we call the resulting network a densely connected graph attention network (DenseGATs). We tested DenseGATs against classical deformation methods and other graph-learning-based strategies. Experiments confirm that our network can predict highly plausible deformations for unseen characters.
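The two-stream GAT block described above can be sketched as follows. This is an illustrative NumPy toy with hypothetical weight shapes, plain softmax attention, and a ReLU, not the authors' architecture:

```python
import numpy as np

def gat_block(features, adjacency, w_aggr, w_self, attn):
    """One two-stream block: attention-weighted neighbor aggregation
    plus a self-reinforced per-node stream (illustrative only)."""
    n = features.shape[0]
    h = features @ w_aggr                       # transform node features
    out = np.zeros_like(h)
    for i in range(n):
        nbrs = np.nonzero(adjacency[i])[0]
        # attention logits between node i and each neighbor j
        logits = np.array([attn @ np.concatenate([h[i], h[j]]) for j in nbrs])
        alpha = np.exp(logits - logits.max())
        alpha /= alpha.sum()                    # softmax over neighbors
        out[i] = alpha @ h[nbrs]                # aggregation stream
    self_stream = features @ w_self             # self-reinforced stream
    return np.maximum(out + self_stream, 0)     # ReLU

rng = np.random.default_rng(0)
x = rng.standard_normal((5, 8))                 # 5 nodes, 8-dim features
adj = (rng.random((5, 5)) > 0.5) | np.eye(5, dtype=bool)  # self-loops kept
y = gat_block(x, adj, rng.standard_normal((8, 8)),
              rng.standard_normal((8, 8)), rng.standard_normal(16))
```

A "dense module" would then feed the concatenated outputs of earlier blocks into each later block, in the spirit of densely connected networks.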
"DenseGATs: A Graph-Attention-Based Network for Nonlinear Character Deformation" — Tianxing Li, Rui Shi, T. Kanai. Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, 2020-05-05. DOI: 10.1145/3384382.3384525
Xiaowei Zhang, C. May, G. Nishida, Daniel G. Aliaga
Automatic creation of lightweight 3D building models from satellite image data enables large-scale, widespread 3D interactive urban rendering. Towards this goal, we present an inverse procedural modeling method that automatically creates building envelopes from satellite imagery. Our key observation is that buildings exhibit regular properties; hence, we can overcome the low-resolution, noisy, and partial building data obtained from satellites by using a two-stage inverse procedural modeling technique. Our method takes in point-cloud data obtained from multi-view satellite stereo processing and produces a crisp, regularized building envelope suitable for fast rendering and optional projective texture mapping. Further, our results show highly complete building models with quality superior to that of competing approaches.
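As a flavor of the "buildings exhibit regular properties" observation, one common regularization ingredient is estimating a footprint's dominant edge orientation. The sketch below is a generic length-weighted circular mean over 90°-periodic edge angles, not the paper's two-stage method:

```python
import numpy as np

def dominant_orientation(points):
    """Estimate a closed footprint's dominant edge orientation in
    degrees (mod 90), weighting each edge by its length."""
    pts = np.asarray(points, dtype=float)
    edges = np.roll(pts, -1, axis=0) - pts      # consecutive edge vectors
    lengths = np.hypot(edges[:, 0], edges[:, 1])
    angles = np.degrees(np.arctan2(edges[:, 1], edges[:, 0])) % 90.0
    # circular mean over the 90-degree-periodic angle, length-weighted
    theta = np.radians(angles * 4)              # map period 90° -> 360°
    mean = np.arctan2((lengths * np.sin(theta)).sum(),
                      (lengths * np.cos(theta)).sum())
    return (np.degrees(mean) / 4) % 90.0

square = [(0, 0), (2, 0), (2, 2), (0, 2)]       # axis-aligned footprint
print(dominant_orientation(square))
```

Snapping noisy edge directions to this dominant angle (and its perpendicular) is one way to turn a ragged point-cloud outline into a crisp envelope.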
"Progressive Regularization of Satellite-Based 3D Buildings for Interactive Rendering" — Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, 2020-05-05. DOI: 10.1145/3384382.3384526
We present a novel random-access depth map compression algorithm (RANDM) for interactive rendering. Our compressed representation provides random access to the depth values and enables real-time parallel decompression on commodity hardware. Our method partitions the depth range captured in a given scene into equal-sized intervals and uses this partition to generate three separate components that exhibit higher coherence. Each of these components is processed independently to generate the compressed stream. Our decompression algorithm is simple and performs prefix-sum computations while also decoding the entropy-compressed blocks. We have evaluated the performance on large databases of depth maps and obtain a compression ratio of 20–100× with a root-mean-square (RMS) error of 0.05–2 in the disparity values of the depth map. Decompression is fast, taking about 1 microsecond per block on a single thread of an Intel Xeon CPU. To the best of our knowledge, RANDM is the first depth map compression algorithm that provides random-access capability for interactive applications.
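The range-partitioning step can be illustrated as follows: split the depth range into equal-sized intervals and separate each value into an interval index and an in-interval offset, two streams that are individually more coherent than the raw depths. This is a simplified sketch, not the actual RANDM encoder:

```python
import numpy as np

def partition_depths(depths, num_intervals):
    """Split the captured depth range into equal-sized intervals and
    decompose each value into (interval index, offset) components."""
    d = np.asarray(depths, dtype=float)
    lo, hi = d.min(), d.max()
    width = (hi - lo) / num_intervals
    idx = np.minimum(((d - lo) / width).astype(int), num_intervals - 1)
    offset = d - (lo + idx * width)             # residual inside interval
    return idx, offset, lo, width

def reassemble(idx, offset, lo, width):
    """Exact inverse of partition_depths."""
    return lo + idx * width + offset

d = np.array([0.3, 1.7, 2.9, 0.8, 2.2])
i, o, lo, w = partition_depths(d, 4)
```

In a real codec the index stream would be entropy-coded per block, with per-block metadata enabling the random access the paper targets.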
"RANDM: Random Access Depth Map Compression Using Range-Partitioning and Global Dictionary" — Srihari Pratapa, Dinesh Manocha. Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, 2020-05-05. DOI: 10.1145/3384382.3384524
We introduce a new inverse modeling method to interactively design crowd animations. Few works focus on providing succinct, high-level, large-scale crowd motion modeling. Our methodology is to read in real or virtual agent trajectory data and automatically infer a set of parameterized crowd motion models. Components of the motion models can then be mixed, matched, and altered, enabling the rapid production of new crowd motions. Our results show novel animations using real-world data, using synthetic data, and imitating real-world scenarios. Moreover, by combining our method with our interactive crowd trajectory sketching tool, we can create complex spatio-temporal crowd animations in about a minute.
"Interactive Inverse Spatio-Temporal Crowd Motion Design" — T. Mathew, Bedrich Benes, Daniel G. Aliaga. Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, 2020-05-05. DOI: 10.1145/3384382.3384528
The human visual and auditory systems are attentive to variations in both lighting and sound, and changes in either are used in audiovisual media to draw attention. In video games specifically, many embedded navigational cues are utilised to subtly suggest navigational choices to players. Both lighting and audio cues are commonly utilised by game designers to signal specific events or to draw player focus and thereby influence navigational decisions. We analyse the influence that combinations of landmark, auditory and illumination cues have on player navigation. 134 participants navigated through a randomly assigned subset of thirty structurally similar virtual mazes with variations of lighting, spatial audio and landmark cues. The solve times for these mazes were analysed to determine the influence of each individual cue and to evaluate any cue-competition or interaction effects. The findings demonstrate that auditory and subtle lighting cues had distinct effects on navigation and maze solve times, and that interaction and cue-competition effects were also evident.
"The Effect of Lighting, Landmarks and Auditory Cues on Human Performance in Navigating a Virtual Maze" — Daryl Marples, Duke Gledhill, P. Carter. Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, 2020-05-05. DOI: 10.1145/3384382.3384527
With the introduction of hardware-supported ray tracing and deep learning for denoising, computer graphics has made a considerable step toward real-time global illumination. In this work, we present an alternative global illumination method: the stochastic substitute tree (SST), a hierarchical structure inspired by lightcuts, with light probability distributions as inner nodes. Our approach distributes virtual point lights (VPLs) in every frame and efficiently constructs the SST over those lights by clustering according to Morton codes. Global illumination is approximated by sampling the SST, considering both the BRDF at the hit location and the intensities of the SST nodes for importance sampling directly from inner nodes of the tree. To remove the introduced Monte Carlo noise, we use a recurrent autoencoder. In combination with temporal filtering, we deliver real-time global illumination for complex scenes with challenging light distributions.
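Clustering VPLs by Morton code, as mentioned above, amounts to quantizing positions, interleaving their bits, and sorting so that spatially nearby lights become adjacent. Below is a standard 30-bit 3D Morton encoding; the sketch is generic, not the paper's GPU implementation:

```python
def expand_bits(v):
    """Spread the lower 10 bits of v so there are two zero bits
    between each original bit (for 30-bit 3D Morton codes)."""
    v = (v * 0x00010001) & 0xFF0000FF
    v = (v * 0x00000101) & 0x0F00F00F
    v = (v * 0x00000011) & 0xC30C30C3
    v = (v * 0x00000005) & 0x49249249
    return v

def morton3d(x, y, z):
    """Interleave 10-bit quantized coordinates in [0, 1) into a Morton
    code; sorting VPLs by this key groups spatially nearby lights."""
    q = lambda t: min(max(int(t * 1024.0), 0), 1023)
    return (expand_bits(q(x)) << 2) | (expand_bits(q(y)) << 1) | expand_bits(q(z))

vpls = [(0.1, 0.2, 0.9), (0.11, 0.21, 0.88), (0.9, 0.9, 0.1)]
clustered = sorted(vpls, key=lambda p: morton3d(*p))  # two nearby VPLs end up adjacent
```

Consecutive runs in the sorted order then become the leaves over which a tree like the SST can be built bottom-up.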
"Stochastic Substitute Trees for Real-Time Global Illumination" — W. Tatzgern, B. Mayr, B. Kerbl, M. Steinberger. Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, 2020-05-05. DOI: 10.1145/3384382.3384521
We present a user-guided system for accessible 3D reconstruction and modeling of real-world objects using multi-view stereo. The system is an interactive tool in which the user models the object on top of multiple selected photographs. Our tool helps the user place quads correctly aligned with the photographs using a multi-view stereo algorithm. This algorithm, in combination with user-provided information about topology, visibility, and how to separate foreground from background, creates favorable conditions for successfully reconstructing the object. The user only needs to manually specify a coarse topology which, followed by subdivision and a global optimization algorithm, yields an accurate model with the desired mesh density. This global optimization algorithm has a higher probability of converging to an accurate result than a fully automatic system. With our proposed tool, we lower the barrier of entry for creating high-quality 3D reconstructions of real-world objects with a desirable topology. Our interactive tool offloads the most tedious and difficult parts of modeling to the computer, while giving the user control over the most common robustness issues in automatic 3D reconstruction. The provided workflow can be a preferable alternative to using automatic scanning techniques followed by re-topologization.
"User-guided 3D reconstruction using multi-view stereo" — Sverker Rasmuson, Erik Sintorn, Ulf Assarsson. Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, 2020-05-05. DOI: 10.1145/3384382.3384530
We describe how to modify hardware page translation to enable CPU software access to compressed and swizzled GPU data arrays as if they were decompressed and stored in row-major order. In a shared memory system, this allows the CPU to directly access the GPU data without copying the data or losing the performance and bandwidth benefits of using compression and swizzling on the GPU. Our method is flexible enough to support a wide variety of existing and future swizzling and compression schemes, including block-based lossless compression that requires per-block metadata. Providing automatic compression can improve performance, even without considering the cost of copying data. In our experiments, we observed up to a 33% reduction in CPU/memory energy use and up to a 35% reduction in CPU computation time.
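A minimal example of what "swizzling" means here: mapping a row-major texel coordinate into a tiled layout, so the address of any (x, y) can be computed directly. The tile size and layout below are hypothetical; real GPU swizzle patterns are vendor-specific, which is why the paper's flexibility across schemes matters:

```python
def swizzled_address(x, y, width, tile=4, bpp=4):
    """Map a row-major texel coordinate to a tiled ("swizzled") byte
    address: texels are stored tile-by-tile so that spatially nearby
    texels share cache lines (a toy layout, not a specific vendor's)."""
    tiles_per_row = width // tile
    tx, ty = x // tile, y // tile               # which tile
    lx, ly = x % tile, y % tile                 # position inside tile
    tile_index = ty * tiles_per_row + tx
    return (tile_index * tile * tile + ly * tile + lx) * bpp
```

The paper's idea is to fold this kind of mapping (plus per-block decompression) into the virtual-address translation path, so CPU code can index the array as if it were plain row-major memory.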
"Automatic GPU Data Compression and Address Swizzling for CPUs via Modified Virtual Address Translation" — L. Seiler, Daqi Lin, Cem Yuksel. Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, 2020-05-05. DOI: 10.1145/3384382.3384533
We propose a novel algorithm to compute centroidal Voronoi tessellation using the GPU. It is based on the iterative approach of Lloyd's method and is designed to address the two major challenges involved: achieving fast convergence with few iterations, and achieving fast computation within each iteration. Our implementation of the algorithm can complete the computation for a large image in hundreds of milliseconds and is faster than all prior work on a state-of-the-art GPU. As such, it is now easier to integrate centroidal Voronoi tessellations into interactive applications.
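For reference, the iterative core of Lloyd's method on a discrete point set looks like this. It is a plain CPU NumPy sketch; the paper's contribution is making both the assignment and centroid steps fast on a GPU:

```python
import numpy as np

def lloyd_cvt(points, sites, iterations=50):
    """Plain Lloyd's method: assign each point to its nearest site,
    then move each site to the centroid of its assigned cluster."""
    sites = np.array(sites, dtype=float)
    for _ in range(iterations):
        # pairwise distances: (num_points, num_sites)
        d = np.linalg.norm(points[:, None, :] - sites[None, :, :], axis=2)
        nearest = d.argmin(axis=1)
        for k in range(len(sites)):
            members = points[nearest == k]
            if len(members):
                sites[k] = members.mean(axis=0)  # centroid update
    return sites

rng = np.random.default_rng(1)
pts = rng.random((2000, 2))                      # discretized 2D domain
centers = lloyd_cvt(pts, rng.random((10, 2)))    # 10 generator sites
```

Each iteration is embarrassingly parallel over points (assignment) and over sites (centroids), which is what makes the GPU formulation attractive.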
"Computing Centroidal Voronoi Tessellation Using the GPU" — Jiaqi Zheng, T. Tan. Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, 2020-05-04. DOI: 10.1145/3384382.3384520
W. V. Toll, F. Grzeskowiak, Axel López-Gandía, Javad Amirian, Florian Berton, Julien Bruneau, Beatriz Cabrero Daniel, Alberto Jovane, J. Pettré
To simulate the low-level ('microscopic') behavior of human crowds, a local navigation algorithm computes how a single person ('agent') should move based on its surroundings. Many algorithms for this purpose have been proposed, each using different principles and implementation details that are difficult to compare. This paper presents a novel framework that describes local agent navigation generically as optimizing a cost function in a velocity space. We show that many state-of-the-art algorithms can be translated to this framework by combining a particular cost function with a particular optimization method. As such, we can reproduce many types of local algorithms using a single general principle. Our implementation of this framework, named UMANS (Unified Microscopic Agent Navigation Simulator), is freely available online. This software enables easy experimentation with different algorithms and parameters. We expect that our work will help understand the true differences between navigation methods, enable honest comparisons between them, simplify the development of new local algorithms, make techniques available to other communities, and stimulate further research on crowd simulation.
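The "cost function in a velocity space" formulation can be sketched generically: sample candidate velocities and pick the one minimizing a cost that combines goal-tracking and collision-avoidance terms. The cost below is a made-up example for illustration, not any specific published method or the UMANS implementation:

```python
import numpy as np

def choose_velocity(pos, vel_pref, neighbors, speed=1.5, samples=200):
    """Sampling-based optimization over velocity space: minimize a
    cost trading off deviation from the preferred velocity against
    predicted closeness to neighbors one second ahead (toy cost)."""
    rng = np.random.default_rng(0)
    ang = rng.uniform(0, 2 * np.pi, samples)
    mag = rng.uniform(0, speed, samples)
    cands = np.stack([mag * np.cos(ang), mag * np.sin(ang)], axis=1)
    best, best_cost = None, np.inf
    for v in cands:
        cost = np.linalg.norm(v - vel_pref)           # stay on course
        for n_pos, n_vel in neighbors:
            gap = (pos + v) - (n_pos + n_vel)          # gap after 1 s
            cost += 1.0 / max(np.linalg.norm(gap), 1e-3)  # avoidance
        if cost < best_cost:
            best, best_cost = v, cost
    return best

# agent heading +x with an oncoming neighbor straight ahead
v = choose_velocity(np.array([0.0, 0.0]), np.array([1.4, 0.0]),
                    [(np.array([2.0, 0.0]), np.array([-1.0, 0.0]))])
```

Swapping in a different cost function or optimizer (gradient-based, grid sampling, etc.) reproduces different local navigation algorithms, which is the framework's unifying point.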
"Generalized Microscopic Crowd Simulation using Costs in Velocity Space" — Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, 2020-05-04. DOI: 10.1145/3384382.3384532