Daniel Meister, Jakub Boksanský, M. Guthe, Jiří Bittner
We study ray reordering as a tool for increasing the performance of existing GPU ray tracing implementations. We focus on ray reordering that is fully agnostic to the particular trace kernel. We summarize the existing methods for computing ray sorting keys and discuss their properties. We propose a novel modification of a previously proposed method based on termination point estimation that is well suited to tracing secondary rays. We evaluate the ray reordering techniques in the context of wavefront path tracing using the RTX trace kernels. We show that ray reordering yields significantly higher trace speed on recent GPUs (1.3–2.0×), but recovering the reordering overhead in the hardware-accelerated trace phase is problematic.
On Ray Reordering Techniques for Faster GPU Ray Tracing. Daniel Meister, Jakub Boksanský, M. Guthe, Jiří Bittner. Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, May 4, 2020. DOI: 10.1145/3384382.3384534.
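A common way to build the kernel-agnostic sorting keys discussed above is to quantize a 3D point per ray, its origin, or an estimated termination point as in the modification the abstract proposes, into a Morton code and sort rays by that code. A minimal origin-based sketch, with hypothetical helper names not taken from the paper:

```python
import numpy as np

def morton3d(x, y, z, bits=10):
    """Interleave the low `bits` bits of quantized x, y, z into a Morton code."""
    def spread(v):
        out = 0
        for i in range(bits):
            out |= ((v >> i) & 1) << (3 * i)
        return out
    return (spread(x) << 2) | (spread(y) << 1) | spread(z)

def reorder_rays(origins, scene_min, scene_max, bits=10):
    """Return a permutation that sorts rays by the Morton code of their
    quantized origins, grouping spatially coherent rays together."""
    scale = (2**bits - 1) / (scene_max - scene_min)
    q = np.clip(((origins - scene_min) * scale).astype(np.int64), 0, 2**bits - 1)
    keys = np.array([morton3d(x, y, z, bits) for x, y, z in q])
    return np.argsort(keys, kind="stable")

origins = np.array([[0.9, 0.9, 0.9], [0.1, 0.1, 0.1], [0.12, 0.1, 0.09]])
order = reorder_rays(origins, scene_min=0.0, scene_max=1.0)
# The two nearby origins end up adjacent after sorting.
```

Sorting by an estimated termination point instead of the origin only changes which 3D point is fed to `morton3d`; the sort itself is unchanged.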
Multi-view stereo can be used to rapidly create realistic virtual content, such as textured meshes or a geometric proxy for free-viewpoint Image-Based Rendering (IBR). These solutions greatly simplify the content creation process compared to traditional methods, but make it difficult to modify the content of the scene. We propose a novel approach to creating scenes by composing (parts of) multiple captured scenes. The main difficulty of such compositions is that the lighting conditions in each captured scene are different; to obtain a realistic composition we need to make the lighting coherent. We propose a two-pass solution that adapts a multi-view relighting network. We first match the lighting conditions of each scene separately, and then synthesize shadows between scenes in a subsequent pass. We also improve the realism of the composition by estimating the change in ambient occlusion in contact areas between parts and by compensating for the color balance of the different cameras used for capture. We illustrate our method with results on multiple compositions of outdoor scenes and show its application to multi-view image composition, IBR, and textured mesh creation.
Repurposing a Relighting Network for Realistic Compositions of Captured Scenes. Baptiste Nicolet, J. Philip, G. Drettakis. Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, May 4, 2020. DOI: 10.1145/3384382.3384523.
In this paper, we propose a novel space that jointly embeds both 2D occluding contours and 3D shapes via a variational autoencoder (VAE) and a volumetric autoencoder. Given a dataset of 3D shapes, we extract their occluding contours via projections from random views and use the occluding contours to train the VAE. Then, the obtained continuous embedding space, where each point is a latent vector that represents an occluding contour, can be used to measure the similarity between occluding contours. After that, the volumetric autoencoder is trained to first map 3D shapes onto the embedding space through a supervised learning process and then decode the merged latent vectors of three occluding contours (from three different views) of a 3D shape to its 3D voxel representation. We conduct various experiments and comparisons to demonstrate the usefulness and effectiveness of our method for sketch-based 3D modeling and shape manipulation applications.
Contour-based 3D Modeling through Joint Embedding of Shapes and Contours. Aobo Jin, Q. Fu, Z. Deng. Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, May 4, 2020. DOI: 10.1145/3384382.3384518.
Yulong Bian, Chao Zhou, Yeqing Chen, Yanshuai Zhao, Juan Liu, Chenglei Yang
The link between flow experience and performance is commonly found to be weak in virtual environments (VEs). The weak association model (WAM) suggests that distraction caused by disjointed features may account for this weak association. People characterized by a field-independent (FI) or field-dependent (FD) cognitive style differ in their ability to sustain attention, and thus may differ in the strength of their flow-performance link. To explore the role of the field dependence-independence (FDI) construct on the flow-performance link in virtual reality (VR), we developed a VR experimental environment and conducted two empirical studies in it. Study 1 revealed that FD individuals exhibit a higher degree of fixation dispersion and a weaker flow-performance link. We then provided visual cues that utilize distractors to achieve more task-oriented attention. Study 2 found that these cues strengthen task performance, as well as the flow-performance link of FD individuals, without increasing distraction. This paper draws conclusions on the effects of human diversity on the flow-performance link in VEs and suggests ways to design VR systems according to individual characteristics.
The Role of the Field Dependence-independence Construct on the Flow-performance Link in Virtual Reality. Yulong Bian, Chao Zhou, Yeqing Chen, Yanshuai Zhao, Juan Liu, Chenglei Yang. Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, May 4, 2020. DOI: 10.1145/3384382.3384529.
We present a novel algorithm for physics-based real-time facial animation driven by muscle deformation. Unlike previous works that use 3D finite elements, we use 2D shell elements to avoid the inefficient or undesired tessellation caused by the thin structure of facial muscles. To simplify the analysis and achieve real-time performance, we adopt the real-time thin shell simulation of [Choi et al. 2007]. Our facial system is composed of four layers, skin, a subcutaneous layer, muscles, and the skull, following human facial anatomy. Skin and muscles are composed of shell elements, the subcutaneous fatty tissue is modeled as a uniform elastic body, and the fixed part of the facial muscles is handled by static position constraints. We control the stretch deformation of the muscles using modal analysis and apply a mass-spring force, triggered by the muscle deformation, to the skin mesh. In our system, only the skin's region of interest is affected by the muscle. To handle the coupled result of facial animation, we decouple the system according to the type of external force applied to the skin. We show a series of real-time facial animations driven by selected major muscles relevant to expressive skin deformation. Our system generalizes to importing new types of muscles and skin meshes when their shapes or positions change.
Real-time Muscle-based Facial Animation using Shell Elements and Force Decomposition. Jungmin Kim, M. Choi, Young J. Kim. Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, May 4, 2020. DOI: 10.1145/3384382.3384531.
We present a novel high-fidelity real-time method to replace the face in a target video clip with the face from a single source portrait image. Specifically, we first reconstruct the illumination, albedo, camera parameters, and wrinkle-level geometric details from both the source image and the target video. Then, the albedo of the source face is modified by a novel harmonization method to match the target face. Finally, the source face is re-rendered and blended into the target video using the lighting and camera parameters from the target video. Our method runs fully automatically and at real-time rates on any target face captured by a camera or taken from legacy video. More importantly, unlike existing deep learning based methods, our method does not need to pre-train any models; that is, it does not require pre-collecting a large image/video dataset of the source or target face for model training. We demonstrate that our method achieves a high level of video realism on a variety of human faces with different identities, ethnicities, skin colors, and expressions.
Real-time Face Video Swapping From A Single Portrait. Luming Ma, Z. Deng. Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, May 4, 2020. DOI: 10.1145/3384382.3384519.
We seek to cover a parametric domain with a set of evenly spaced bands whose number and width vary according to a density field. We propose an implicit procedural algorithm that generates the band pattern from a pixel shader and adapts in real time to changes in the control fields. Each band is uniquely identified by an integer. This allows a wide range of texturing effects, including specifying a different appearance for each individual band. Our technique also allows progressive gradations of scale, avoiding the abrupt doubling of the number of lines typical of subdivision approaches. This leads to a general approach for drawing bands, drawing splitting and merging curves, and drawing evenly spaced streamlines. Using these base ingredients, we demonstrate a wide variety of texturing effects.
Procedural band patterns. Jimmy Etienne, S. Lefebvre. Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, March 3, 2020. DOI: 10.1145/3384382.3384522.
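The integer band identifier described above can be computed per pixel from the parametric coordinate and the band count. A constant-density simplification (the paper additionally handles a varying density field and the progressive gradations of scale mentioned in the abstract) might look like:

```python
import math

def band(u, density, duty=0.5):
    """Constant-density sketch: return (band_id, inside) for a parametric
    coordinate u, where `density` bands cover [0, 1) and `duty` is the
    filled fraction of each band.  band_id uniquely identifies the band,
    which is what enables per-band appearance control."""
    t = u * density
    band_id = int(math.floor(t))  # unique integer identifying the band
    frac = t - band_id            # position within the band, in [0, 1)
    return band_id, frac < duty

band(0.26, 10)  # third band (id 2); frac 0.6 lies past the filled half
```

A real pixel shader would evaluate the same expression per fragment and look up the density from a control texture rather than taking it as a constant.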
I3D '20: Symposium on Interactive 3D Graphics and Games, San Francisco, CA, USA, September 15-17, 2020. Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, 2020. DOI: 10.1145/3384382.
We present a fast algorithm for continuous collision detection between deformable models. Our approach performs no precomputation and can handle general triangulated models undergoing topological changes. We present a fast decomposition algorithm that represents the mesh boundary using hierarchical clusters and only needs to perform inter-cluster collision checks. The key idea is to compute such clusters quickly and merge them to generate a dynamic bounding volume hierarchy. The overall approach reduces the overhead of computing the hierarchy and also reduces the number of false positives. We highlight the algorithm's performance on many complex benchmarks generated from medical simulations and crash analysis. In practice, we observe a 1.4 to 5 times speedup over prior CCD algorithms for deformable models in our benchmarks.
Interactive Continuous Collision Detection for Topology Changing Models Using Dynamic Clustering. Liang He, Ricardo Ortiz, Andinet Enquobahrie, Dinesh Manocha. Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, February 1, 2015, pp. 47-54. DOI: 10.1145/2699276.2699286.
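The broad phase of the clustering approach above can be sketched as: bound each cluster of boundary triangles with an AABB and emit only overlapping cluster pairs for narrow-phase continuous collision tests. This is an illustrative simplification; the paper merges clusters into a dynamic bounding volume hierarchy rather than testing all cluster pairs:

```python
import numpy as np

def aabb(tris):
    """Axis-aligned bounding box of an (N, 3, 3) array of triangles."""
    pts = tris.reshape(-1, 3)
    return pts.min(axis=0), pts.max(axis=0)

def overlap(a, b):
    """True if two AABBs (given as (min, max) corner pairs) intersect."""
    (amin, amax), (bmin, bmax) = a, b
    return bool(np.all(amax >= bmin) and np.all(bmax >= amin))

def cluster_pairs(clusters):
    """Return index pairs of clusters whose AABBs overlap; only these
    pairs need narrow-phase continuous collision tests."""
    boxes = [aabb(c) for c in clusters]
    return [(i, j)
            for i in range(len(boxes))
            for j in range(i + 1, len(boxes))
            if overlap(boxes[i], boxes[j])]

clusters = [
    np.array([[[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]]),
    np.array([[[10.0, 10.0, 10.0], [11.0, 10.0, 10.0], [10.0, 11.0, 10.0]]]),
    np.array([[[0.5, 0.5, 0.0], [1.5, 0.5, 0.0], [0.5, 1.5, 0.0]]]),
]
pairs = cluster_pairs(clusters)  # only clusters 0 and 2 overlap
```

Because the clusters are recomputed from the current mesh boundary each frame, the same broad phase works unchanged when the topology of the model changes.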
Water drops and water flows exhibit interesting motion behaviors and striking patterns on the surfaces of objects, such as plant leaves and glass panes. Water drops and water flows are commonly seen on a rainy day. A water drop contains a small amount of water, and its motion is affected by various factors, including gravity, surface tension, cohesion, and adhesion [Zhang et al. 2012]. The situation becomes more complicated when we consider the roughness of the surface, surface impurities, and so on. Kaneda et al. [1993] proposed a discrete model of a glass plate for simulating the streams formed by water droplets. The glass plate is divided into a grid, and a water droplet is represented as a particle. The law of conservation of momentum is applied when merging droplets. A simple ray tracing technique renders the water droplets, which are represented as spheres.
Real-time water drops and flows on glass panes. Kai-Chun Chen, Pei-Shan Chen, Sai-Keung Wong. Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, March 21, 2013, p. 192. DOI: 10.1145/2448196.2448240.
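Merging two droplet particles under conservation of momentum, as in the Kaneda-style model described above, reduces to summing the masses and taking the mass-weighted average of the velocities. A minimal sketch:

```python
def merge_drops(m1, v1, m2, v2):
    """Merge two droplet particles.  Mass is additive; velocity follows
    conservation of momentum: v = (m1*v1 + m2*v2) / (m1 + m2)."""
    m = m1 + m2
    v = tuple((m1 * a + m2 * b) / m for a, b in zip(v1, v2))
    return m, v

# A slow heavy drop absorbs a faster light one falling along -y.
merge_drops(1.0, (0.0, -2.0, 0.0), 3.0, (0.0, -1.0, 0.0))
# → (4.0, (0.0, -1.25, 0.0))
```

The merged velocity lies between the two input velocities, weighted toward the heavier drop, which is what produces the characteristic acceleration of a stream when a running drop absorbs smaller resting ones.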