
Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games: Latest Publications

On Ray Reordering Techniques for Faster GPU Ray Tracing
Pub Date: 2020-05-04 DOI: 10.1145/3384382.3384534
Daniel Meister, Jakub Boksanský, M. Guthe, Jiří Bittner
We study ray reordering as a tool for increasing the performance of existing GPU ray tracing implementations. We focus on ray reordering that is fully agnostic to the particular trace kernel. We summarize the existing methods for computing the ray sorting keys and discuss their properties. We propose a novel modification of a previously proposed method using termination point estimation that is well-suited to tracing secondary rays. We evaluate the ray reordering techniques in the context of wavefront path tracing using the RTX trace kernels. We show that ray reordering yields significantly higher trace speed on recent GPUs (1.3–2.0×), but recovering the reordering overhead in the hardware-accelerated trace phase is problematic.
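Among the sorting keys the paper surveys, a common kernel-agnostic choice is a Morton (Z-order) code of the quantized ray origin: sorting by this key places rays with nearby origins next to each other in memory. A minimal sketch, assuming a known scene bounding box (function names here are illustrative, not from the paper):

```python
def quantize(v, lo, hi, bits=10):
    """Map a float in [lo, hi] to an integer grid coordinate in [0, 2**bits - 1]."""
    t = (v - lo) / (hi - lo)
    return min(int(t * (1 << bits)), (1 << bits) - 1)

def morton3d(x, y, z):
    """Interleave the bits of three 10-bit integers into a 30-bit Morton code."""
    code = 0
    for i in range(10):
        code |= ((x >> i) & 1) << (3 * i)      # x occupies bit positions 0, 3, 6, ...
        code |= ((y >> i) & 1) << (3 * i + 1)  # y occupies bit positions 1, 4, 7, ...
        code |= ((z >> i) & 1) << (3 * i + 2)  # z occupies bit positions 2, 5, 8, ...
    return code

def ray_sort_key(origin, scene_min, scene_max):
    """Sorting key for a ray: the Morton code of its quantized origin.
    Sorting a ray batch by this key groups spatially coherent rays,
    regardless of which trace kernel consumes them afterwards."""
    q = [quantize(o, lo, hi)
         for o, lo, hi in zip(origin, scene_min, scene_max)]
    return morton3d(*q)
```

Sorting a batch with `rays.sort(key=lambda r: ray_sort_key(r.origin, smin, smax))` is then independent of the trace kernel, which is the agnosticism the abstract emphasizes; the sort itself is the overhead that must be amortized.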
Citations: 14
Repurposing a Relighting Network for Realistic Compositions of Captured Scenes
Pub Date: 2020-05-04 DOI: 10.1145/3384382.3384523
Baptiste Nicolet, J. Philip, G. Drettakis
Multi-view stereo can be used to rapidly create realistic virtual content, such as textured meshes or a geometric proxy for free-viewpoint Image-Based Rendering (IBR). These solutions greatly simplify the content creation process compared to traditional methods, but it is difficult to modify the content of the scene. We propose a novel approach to create scenes by composing (parts of) multiple captured scenes. The main difficulty of such compositions is that lighting conditions in each captured scene are different; to obtain a realistic composition we need to make lighting coherent. We propose a two-pass solution, by adapting a multi-view relighting network. We first match the lighting conditions of each scene separately and then synthesize shadows between scenes in a subsequent pass. We also improve the realism of the composition by estimating the change in ambient occlusion in contact areas between parts and compensate for the color balance of the different cameras used for capture. We illustrate our method with results on multiple compositions of outdoor scenes and show its application to multi-view image composition, IBR and textured mesh creation.
Citations: 5
Contour-based 3D Modeling through Joint Embedding of Shapes and Contours
Pub Date: 2020-05-04 DOI: 10.1145/3384382.3384518
Aobo Jin, Q. Fu, Z. Deng
In this paper, we propose a novel space that jointly embeds both 2D occluding contours and 3D shapes via a variational autoencoder (VAE) and a volumetric autoencoder. Given a dataset of 3D shapes, we extract their occluding contours via projections from random views and use the occluding contours to train the VAE. Then, the obtained continuous embedding space, where each point is a latent vector that represents an occluding contour, can be used to measure the similarity between occluding contours. After that, the volumetric autoencoder is trained to first map 3D shapes onto the embedding space through a supervised learning process and then decode the merged latent vectors of three occluding contours (from three different views) of a 3D shape to its 3D voxel representation. We conduct various experiments and comparisons to demonstrate the usefulness and effectiveness of our method for sketch-based 3D modeling and shape manipulation applications.
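Two operations from the abstract are simple enough to state concretely: measuring contour similarity as a distance in the latent space, and merging the three per-view latent codes before decoding. A minimal sketch, with the caveat that the abstract does not specify the merge operation, so concatenation is an assumption here (all names are illustrative):

```python
import math

def latent_distance(z1, z2):
    """Euclidean distance between two latent codes; a smaller distance
    means the two occluding contours are more similar in the embedding."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(z1, z2)))

def merge_latents(z_views):
    """Merge the latent codes of the three views into one vector for the
    volumetric decoder. Concatenation is assumed: the abstract says only
    that the three vectors are 'merged'."""
    return [c for z in z_views for c in z]
```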
Citations: 10
The Role of the Field Dependence-independence Construct on the Flow-performance Link in Virtual Reality
Pub Date: 2020-05-04 DOI: 10.1145/3384382.3384529
Yulong Bian, Chao Zhou, Yeqing Chen, Yanshuai Zhao, Juan Liu, Chenglei Yang
The flow experience-performance link is commonly found weak in virtual environments (VEs). The weak association model (WAM) suggests that distraction caused by disjointed features may be associated with the weak association. People characterized by a field independent (FI) or field dependent (FD) cognitive style have different abilities in handling sustained attention, so they may perform differently in the flow-performance link. To explore the role of the field dependence-independence (FDI) construct on the flow-performance link in virtual reality (VR), we developed a VR experimental environment and performed two empirical studies with it. Study 1 revealed that FD individuals have a higher dispersion of fixations and show a weaker flow-performance link. Next, we provide visual cues that utilize distractors to achieve more task-oriented attention. Study 2 found that this helps strengthen task performance, as well as the flow-performance link of FD individuals, without increasing distraction. This paper draws conclusions on the effects of human diversity on the flow-performance link in VEs and presents ways to design a VR system according to individual characteristics.
Citations: 11
Real-time Muscle-based Facial Animation using Shell Elements and Force Decomposition
Pub Date: 2020-05-04 DOI: 10.1145/3384382.3384531
Jungmin Kim, M. Choi, Young J. Kim
We present a novel algorithm for physics-based real-time facial animation driven by muscle deformation. Unlike previous works using 3D finite elements, we use a 2D shell element to avoid inefficient or undesired tessellation due to the thin structure of facial muscles. To simplify the analysis and achieve real-time performance, we adopt the real-time thin shell simulation of [Choi et al. 2007]. Our facial system is composed of four layers, based on human facial anatomy: skin, subcutaneous layer, muscles, and skull. Skin and muscles are composed of shell elements, subcutaneous fatty tissue is assumed to be a uniform elastic body, and the fixed part of the facial muscles is handled by a static position constraint. We control muscles to produce stretch deformation using modal analysis and apply a mass-spring force, triggered by the muscle deformation, to the skin mesh. In our system, only the region of interest of the skin can be affected by the muscle. To handle the coupled result of facial animation, we decouple the system according to the type of external forces applied to the skin. We show a series of real-time facial animations driven by selected major muscles that are relevant to expressive skin deformation. Our system generalizes to importing new types of muscles and skin meshes when their shapes or positions change.
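The skin response above is driven by mass-spring forces triggered by muscle deformation. A minimal sketch of the Hooke spring force used in such mass-spring skin models, assuming illustrative names (this is not the authors' code):

```python
import math

def spring_force(p, q, rest_length, stiffness):
    """Hooke force acting on skin vertex p from a spring attached at q.
    The force points along (q - p) when the spring is stretched beyond
    rest_length, and away from q when it is compressed."""
    d = [b - a for a, b in zip(p, q)]                 # vector p -> q
    length = math.sqrt(sum(c * c for c in d))         # current spring length
    scale = stiffness * (length - rest_length) / length
    return tuple(scale * c for c in d)
```

Summing such forces over the springs attached to each skin vertex, then integrating positions per frame, is the standard way a muscle displacement propagates into skin deformation.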
Citations: 1
Real-time Face Video Swapping From A Single Portrait
Pub Date: 2020-05-04 DOI: 10.1145/3384382.3384519
Luming Ma, Z. Deng
We present a novel high-fidelity real-time method to replace the face in a target video clip by the face from a single source portrait image. Specifically, we first reconstruct the illumination, albedo, camera parameters, and wrinkle-level geometric details from both the source image and the target video. Then, the albedo of the source face is modified by a novel harmonization method to match the target face. Finally, the source face is re-rendered and blended into the target video using the lighting and camera parameters from the target video. Our method runs fully automatically and at real-time rate on any target face captured by cameras or from legacy video. More importantly, unlike existing deep learning based methods, our method does not need to pre-train any models, i.e., pre-collecting a large image/video dataset of the source or target face for model training is not needed. We demonstrate that a high level of video-realism can be achieved by our method on a variety of human faces with different identities, ethnicities, skin colors, and expressions.
Citations: 3
Procedural band patterns
Pub Date: 2020-03-03 DOI: 10.1145/3384382.3384522
Jimmy Etienne, S. Lefebvre
We seek to cover a parametric domain with a set of evenly spaced bands whose number and width vary according to a density field. We propose an implicit procedural algorithm that generates the band pattern in a pixel shader and adapts to changes in the control fields in real time. Each band is uniquely identified by an integer. This allows a wide range of texturing effects, including specifying a different appearance in each individual band. Our technique also affords progressive gradations of scale, avoiding the abrupt doubling of the number of lines typical of subdivision approaches. This leads to a general approach for drawing bands, drawing splitting and merging curves, and drawing evenly spaced streamlines. Using these base ingredients, we demonstrate a wide variety of texturing effects.
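For a constant density field, the core per-pixel computation reduces to: scale the parametric coordinate by the density, take the integer part as the band identifier, and keep the fractional part as the position across the band. A minimal sketch of that base case, in Python rather than shader code (the paper's contribution, handling spatially varying density with progressive transitions, is not reproduced here):

```python
import math

def band(u, density):
    """Map parametric coordinate u to (band_id, t): the integer id of the
    band it falls in, and t in [0, 1), the position across that band."""
    s = u * density            # density = bands per unit length
    band_id = math.floor(s)
    return band_id, s - band_id

def band_color(u, density):
    """Example texturing effect: alternate two appearances by band parity.
    The unique integer id is what makes per-band styling possible."""
    band_id, _ = band(u, density)
    return "dark" if band_id % 2 == 0 else "light"
```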
Citations: 5
I3D '20: Symposium on Interactive 3D Graphics and Games, San Francisco, CA, USA, September 15-17, 2020
Citations: 0
Interactive Continuous Collision Detection for Topology Changing Models Using Dynamic Clustering.
Pub Date: 2015-02-01 DOI: 10.1145/2699276.2699286
Liang He, Ricardo Ortiz, Andinet Enquobahrie, Dinesh Manocha

We present a fast algorithm for continuous collision detection between deformable models. Our approach performs no precomputation and can handle general triangulated models undergoing topological changes. We present a fast decomposition algorithm that represents the mesh boundary using hierarchical clusters and only needs to perform inter-cluster collision checks. The key idea is to compute such clusters quickly and merge them to generate a dynamic bounding volume hierarchy. The overall approach reduces the overhead of computing the hierarchy and also reduces the number of false positives. We highlight the algorithm's performance on many complex benchmarks generated from medical simulations and crash analysis. In practice, we observe 1.4 to 5 times speedup over prior CCD algorithms for deformable models in our benchmarks.
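The inter-cluster checks rest on standard bounding-volume operations: merging cluster boxes bottom-up into a hierarchy and testing boxes for overlap. A minimal sketch with axis-aligned bounding boxes, under the assumption of a simple pairwise merge order (the paper's dynamic cluster construction is more involved):

```python
def aabb_union(a, b):
    """Smallest axis-aligned box enclosing boxes a and b, each given as a
    (min_corner, max_corner) pair of 3-tuples."""
    return (tuple(map(min, a[0], b[0])), tuple(map(max, a[1], b[1])))

def aabb_overlap(a, b):
    """True if the boxes intersect: their extents must overlap on every axis."""
    return all(a[0][i] <= b[1][i] and b[0][i] <= a[1][i] for i in range(3))

def merge_level(boxes):
    """One bottom-up step of hierarchy construction: merge adjacent pairs of
    cluster boxes into parent boxes (an odd leftover box passes through).
    Repeating this until one box remains yields a bounding volume hierarchy."""
    parents = [aabb_union(boxes[i], boxes[i + 1])
               for i in range(0, len(boxes) - 1, 2)]
    if len(boxes) % 2:
        parents.append(boxes[-1])
    return parents
```

Collision queries then descend the hierarchy, pruning any pair of subtrees whose boxes fail `aabb_overlap`, which is what keeps the checks inter-cluster rather than all-pairs.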

Citations: 15
Real-time water drops and flows on glass panes
Pub Date: 2013-03-21 DOI: 10.1145/2448196.2448240
Kai-Chun Chen, Pei-Shan Chen, Sai-Keung Wong
Water drops and water flows exhibit interesting motion behaviors and amazing patterns on the surfaces of objects, such as leaves of plants and glass panes. Water drops and water flows are commonly seen on a rainy day. A water drop contains a small amount of water. The motion of a water drop is affected by various factors, including gravity, surface tension, cohesion force, and adhesion [Zhang et al. 2012]. The situation is more complicated when we consider the roughness of the surface, surface impurities, and so on. Kaneda et al. [1993] proposed a discrete model of a glass plate for simulating the streams from the water droplets. The glass plate is divided into a grid. A water droplet is represented as a particle. The law of conservation of momentum is applied for merging droplets. A simple ray tracing technique is adopted for rendering the water droplets, which are represented as spheres.
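The momentum-conserving merge rule mentioned above is easy to state concretely: the merged droplet carries the combined mass and the mass-weighted average velocity. A minimal sketch with illustrative names:

```python
def merge_drops(m1, v1, m2, v2):
    """Merge two droplets with masses m1, m2 and velocity vectors v1, v2.
    Mass adds; velocity is the momentum-conserving weighted average,
    so total momentum m1*v1 + m2*v2 is preserved."""
    m = m1 + m2
    v = tuple((m1 * a + m2 * b) / m for a, b in zip(v1, v2))
    return m, v
```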
Citations: 0