
Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games: Latest Publications

Fast global illumination for interactive volume visualization
Pub Date : 2013-03-21 DOI: 10.1145/2448196.2448205
Yubo Zhang, K. Ma
High quality global illumination can enhance the visual perception of depth cues and local thickness in volumetric data, but it is seldom used in scientific visualization because of its high computational cost. This paper presents a novel grid-based illumination technique that is specially designed and optimized for volume visualization. It supports common light sources and dynamic transfer function editing. Our method models light propagation in a volume, including both absorption and scattering, using a convection-diffusion equation that can be solved numerically. The main advantage of this technique is that light modeling and simulation can be separated: a unified partial differential equation models various illumination effects, and highly parallelized grid-based numerical schemes solve it. Results show that our method can achieve high quality volume illumination with dynamic color and opacity mapping and various light sources in real time. The added illumination effects can greatly enhance the visual perception of spatial structures in volume data.
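The grid-based convection-diffusion idea can be illustrated with a toy relaxation solver. The sketch below is my simplification, not the paper's scheme: it drops the convection term, uses a fixed diffusion weight, and injects light only along the top boundary, but it shows how a per-cell numerical update propagates attenuated light through a volume slice.

```python
import math

def propagate_light(sigma_a, light_top, iters=200, d=0.25):
    """Illustrative Jacobi-style relaxation of a light field on a 2D grid.

    sigma_a  : 2D list of per-cell absorption coefficients; row 0 is the top.
    light_top: light injected along the top boundary (one value per column).
    d        : diffusion weight (illustrative constant, not from the paper).
    """
    ny, nx = len(sigma_a), len(sigma_a[0])
    I = [[0.0] * nx for _ in range(ny)]
    # boundary condition: top row receives the (absorbed) incoming light
    I[0] = [l * math.exp(-sigma_a[0][j]) for j, l in enumerate(light_top)]
    for _ in range(iters):
        new = [row[:] for row in I]
        for y in range(1, ny):
            for x in range(nx):
                # discrete diffusion: average the available neighbours...
                nb = [I[y - 1][x]]
                if y + 1 < ny: nb.append(I[y + 1][x])
                if x > 0:      nb.append(I[y][x - 1])
                if x + 1 < nx: nb.append(I[y][x + 1])
                # ...then attenuate by this cell's absorption
                new[y][x] = d * sum(nb) * math.exp(-sigma_a[y][x])
        I = new
    return I
```

Because each cell's update reads only its neighbours from the previous iterate, all cells can be updated in parallel, which is the property that makes grid-based schemes like this GPU-friendly.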
{"title":"Fast global illumination for interactive volume visualization","authors":"Yubo Zhang, K. Ma","doi":"10.1145/2448196.2448205","DOIUrl":"https://doi.org/10.1145/2448196.2448205","url":null,"abstract":"High quality global illumination can enhance the visual perception of depth cue and local thickness of volumetric data but it is seldom used in scientific visualization because of its high computational cost. This paper presents a novel grid-based illumination technique which is specially designed and optimized for volume visualization purpose. It supports common light sources and dynamic transfer function editing. Our method models light propagation, including both absorption and scattering, in a volume using a convection-diffusion equation that can be solved numerically. The main advantage of such technique is that the light modeling and simulation can be separated, where we can use a unified partial-differential equation to model various illumination effects, and adopt highly-parallelized grid-based numerical schemes to solve it. Results show that our method can achieve high quality volume illumination with dynamic color and opacity mapping and various light sources in real-time. The added illumination effects can greatly enhance the visual perception of spatial structures of volume data.","PeriodicalId":91160,"journal":{"name":"Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games","volume":"24 1","pages":"55-62"},"PeriodicalIF":0.0,"publicationDate":"2013-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83243040","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 27
Multi-view ambient occlusion with importance sampling
Pub Date : 2013-03-21 DOI: 10.1145/2448196.2448214
K. Vardis, Georgios Papaioannou, A. Gaitatzes
Screen-space ambient occlusion and obscurance (AO) techniques have become de-facto methods for ambient light attenuation and contact shadows in real-time rendering. Although extensive research has been conducted to improve the quality and performance of AO techniques, view-dependent artifacts remain a major issue. This paper introduces Multi-view Ambient Occlusion, a generic per-fragment view weighting scheme for evaluating screen-space occlusion or obscurance using multiple, arbitrary views, such as the readily available shadow maps. Additionally, it exploits the resulting weights to perform adaptive sampling, based on the importance of each view to reduce the total number of samples, while maintaining the image quality. Multi-view Ambient Occlusion improves and stabilizes the screen-space AO estimation without overestimating the results and can be combined with a variety of existing screen-space AO techniques. We demonstrate the results of our sampling method with both open volume- and solid angle-based AO algorithms.
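The per-view weighting and importance-driven sample allocation can be sketched as follows. The weight values themselves (e.g. how well a view observes the fragment) and the budget-splitting rule are my assumptions, not the paper's exact formulas:

```python
def allocate_samples(view_weights, total_samples):
    """Split a fixed sample budget across views in proportion to their
    importance weights; zero-weight views receive no samples."""
    wsum = sum(view_weights)
    if wsum == 0:
        return [0] * len(view_weights)
    raw = [w / wsum * total_samples for w in view_weights]
    counts = [int(r) for r in raw]
    # hand out the leftover samples to the largest fractional remainders
    leftover = total_samples - sum(counts)
    order = sorted(range(len(raw)), key=lambda i: raw[i] - counts[i], reverse=True)
    for i in order[:leftover]:
        counts[i] += 1
    return counts

def combine_occlusion(view_weights, view_ao):
    """Weighted average of per-view AO estimates for one fragment."""
    wsum = sum(view_weights)
    return sum(w * a for w, a in zip(view_weights, view_ao)) / wsum
```

For example, `allocate_samples([0.5, 0.3, 0.2], 10)` gives views with higher weight proportionally more of the budget, so the total sample count stays fixed while well-placed views contribute most of the estimate.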
{"title":"Multi-view ambient occlusion with importance sampling","authors":"K. Vardis, Georgios Papaioannou, A. Gaitatzes","doi":"10.1145/2448196.2448214","DOIUrl":"https://doi.org/10.1145/2448196.2448214","url":null,"abstract":"Screen-space ambient occlusion and obscurance (AO) techniques have become de-facto methods for ambient light attenuation and contact shadows in real-time rendering. Although extensive research has been conducted to improve the quality and performance of AO techniques, view-dependent artifacts remain a major issue. This paper introduces Multi-view Ambient Occlusion, a generic per-fragment view weighting scheme for evaluating screen-space occlusion or obscurance using multiple, arbitrary views, such as the readily available shadow maps. Additionally, it exploits the resulting weights to perform adaptive sampling, based on the importance of each view to reduce the total number of samples, while maintaining the image quality. Multi-view Ambient Occlusion improves and stabilizes the screen-space AO estimation without overestimating the results and can be combined with a variety of existing screen-space AO techniques. We demonstrate the results of our sampling method with both open volume- and solid angle-based AO algorithms.","PeriodicalId":91160,"journal":{"name":"Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games","volume":"36 1","pages":"111-118"},"PeriodicalIF":0.0,"publicationDate":"2013-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89923771","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 30
Efficient motion retrieval in large motion databases
Pub Date : 2013-03-21 DOI: 10.1145/2448196.2448199
Mubbasir Kapadia, I-Kao Chiang, Tiju Thomas, N. Badler, Joseph T. Kider
There has been a recent paradigm shift in the computer animation industry, with an increasing use of pre-recorded motion for animating virtual characters. A fundamental requirement for using motion capture data is an efficient method for indexing and retrieving motions. In this paper, we propose a flexible, efficient method for searching arbitrarily complex motions in large motion databases. Motions are encoded using keys which represent a wide array of structural, geometric, and dynamic features of human motion. Keys provide a representative search space for indexing motions, and users can specify sequences of key values as well as multiple combinations of key sequences to search for complex motions. We use a trie-based data structure to provide an efficient mapping from key sequences to motions. The search times (even on a single CPU) are very fast, opening the possibility of using large motion data sets in real-time applications.
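The key-sequence-to-motion mapping can be sketched with a minimal trie. The key names below are placeholders, not the paper's actual feature encoding; each node records every motion whose key sequence passes through it, so a prefix query returns all matches in a single walk:

```python
class MotionTrie:
    """Trie keyed by sequences of motion-key values, mapping to motion ids."""

    def __init__(self):
        self.children = {}   # key value -> child MotionTrie
        self.motions = set() # ids of motions whose sequence passes through here

    def insert(self, key_sequence, motion_id):
        node = self
        for key in key_sequence:
            node = node.children.setdefault(key, MotionTrie())
            node.motions.add(motion_id)

    def search(self, key_sequence):
        """Return every motion id whose key sequence starts with the query."""
        node = self
        for key in key_sequence:
            if key not in node.children:
                return set()
            node = node.children[key]
        return node.motions
```

A query costs one dictionary lookup per key in the query sequence, independent of the database size, which is what makes trie indexing attractive for large motion collections.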
{"title":"Efficient motion retrieval in large motion databases","authors":"Mubbasir Kapadia, I-Kao Chiang, Tiju Thomas, N. Badler, Joseph T. Kider","doi":"10.1145/2448196.2448199","DOIUrl":"https://doi.org/10.1145/2448196.2448199","url":null,"abstract":"There has been a recent paradigm shift in the computer animation industry with an increasing use of pre-recorded motion for animating virtual characters. A fundamental requirement to using motion capture data is an efficient method for indexing and retrieving motions. In this paper, we propose a flexible, efficient method for searching arbitrarily complex motions in large motion databases. Motions are encoded using keys which represent a wide array of structural, geometric and, dynamic features of human motion. Keys provide a representative search space for indexing motions and users can specify sequences of key values as well as multiple combination of key sequences to search for complex motions. We use a trie-based data structure to provide an efficient mapping from key sequences to motions. The search times (even on a single CPU) are very fast, opening the possibility of using large motion data sets in real-time applications.","PeriodicalId":91160,"journal":{"name":"Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games","volume":"29 1","pages":"19-28"},"PeriodicalIF":0.0,"publicationDate":"2013-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88517415","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 118
Creating a large area of trees based on FOREST PRO
Pub Date : 2013-03-21 DOI: 10.1145/2448196.2448230
Jincan Lin, Yun-Wen Huang, Junfeng Yao
Forest Pro lets you create large areas of trees, bushes, and other models in a short time. If you create 50,000 trees in a scene, each with more than ten thousand polygons, the viewport may remain responsive but rendering can stall. With Forest Pro, creating tens of thousands of trees is easy and rendering does not get stuck, which is a big advantage for animators who often build outdoor landscapes. Forest Pro is made by the iToo company, as shown in Figure 1.
{"title":"Creating a large area of trees based on FOREST PRO","authors":"Jincan Lin, Yun-Wen Huang, Junfeng Yao","doi":"10.1145/2448196.2448230","DOIUrl":"https://doi.org/10.1145/2448196.2448230","url":null,"abstract":"Forest Pro makes you can create large area trees, bushes and other models in a short time. If you want to create 50000 trees in a scene, with each tree has more than 10 thousands multilateral type, even if the computer can drag, it may get stuck in the rendering time. Now create tens of thousands of trees through Forest Pro is easy, but also it won't get stuck in the rendering process, this is a big edge for the animators who often make outdoor landscape. Made by IToo company. As is shown in figure 1.","PeriodicalId":91160,"journal":{"name":"Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games","volume":"11 2 1","pages":"182"},"PeriodicalIF":0.0,"publicationDate":"2013-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84079713","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Flip-flop: convex hull construction via star-shaped polyhedron in 3D
Pub Date : 2013-03-21 DOI: 10.1145/2448196.2448203
Mingcen Gao, Thanh-Tung Cao, T. Tan, Zhiyong Huang
Flipping is a local and efficient operation for constructing the convex hull in an incremental fashion. However, it is known that the traditional flip algorithm is not able to compute the convex hull when applied to a polyhedron in R3. Our novel Flip-Flop algorithm is a variant of the flip algorithm. It overcomes this deficiency: it always computes the convex hull of a given star-shaped polyhedron, with provable correctness. Applying this to construct the convex hull of a point set in R3, we develop ffHull, a flip algorithm that allows nonrestrictive insertion of many vertices before any flipping of edges. This is unlike the well-known incremental fashion of strictly alternating between inserting a single vertex and flipping. The new approach is not only simpler and more efficient for CPU implementation but also maps well to the massively parallel nature of the modern GPU. As shown in our experiments, ffHull running on the CPU is as fast as the best-known convex hull implementation, qHull. As for the GPU, ffHull also outperforms all known prior work. From this, we further obtain the first known solution to computing the 2D regular triangulation on the GPU.
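As a 2D analogue of such local repair operations (illustrative only, not the paper's 3D Flip-Flop algorithm), Andrew's monotone chain builds a convex hull by popping any vertex that forms a non-convex turn, which is the same spirit of fixing the boundary with local tests:

```python
def cross(o, a, b):
    # z-component of (a - o) x (b - o); > 0 means a counterclockwise turn
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull_2d(points):
    """2D convex hull via monotone chain: local, flip-like boundary repairs."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def half(seq):
        h = []
        for p in seq:
            # local repair: remove the last vertex while it makes a
            # clockwise (non-convex) turn with the incoming point
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h

    lower = half(pts)
    upper = half(reversed(pts))
    # drop the duplicated endpoints when joining the two chains
    return lower[:-1] + upper[:-1]
```

Each point is pushed and popped at most once, so the scan after sorting is linear; the 3D case is harder precisely because a single local flip can fail on general polyhedra, which is the gap Flip-Flop addresses.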
{"title":"Flip-flop: convex hull construction via star-shaped polyhedron in 3D","authors":"Mingcen Gao, Thanh-Tung Cao, T. Tan, Zhiyong Huang","doi":"10.1145/2448196.2448203","DOIUrl":"https://doi.org/10.1145/2448196.2448203","url":null,"abstract":"Flipping is a local and efficient operation to construct the convex hull in an incremental fashion. However, it is known that the traditional flip algorithm is not able to compute the convex hull when applied to a polyhedron in R3. Our novel Flip-Flop algorithm is a variant of the flip algorithm. It overcomes the deficiency of the traditional one to always compute the convex hull of a given star-shaped polyhedron with provable correctness. Applying this to construct convex hull of a point set in R3, we develop ffHull, a flip algorithm that allows nonrestrictive insertion of many vertices before any flipping of edges. This is unlike the well-known incremental fashion of strictly alternating between inserting a single vertex and flipping. The new approach is not only simpler and more efficient for CPU implementation but also maps well to the massively parallel nature of the modern GPU. As shown in our experiments, ffHull running on the CPU is as fast as the best-known convex hull implementation, qHull. As for the GPU, ffHull also outperforms all known prior work. From this, we further obtain the first known solution to computing the 2D regular triangulation on the GPU.","PeriodicalId":91160,"journal":{"name":"Proceedings. 
ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games","volume":"53 1","pages":"45-54"},"PeriodicalIF":0.0,"publicationDate":"2013-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85714295","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 17
Displaying large user-generated virtual worlds from the cloud
Pub Date : 2013-03-21 DOI: 10.1145/2448196.2448231
T. Azim, Ewen Cheslack-Postava, P. Levis
Unlike most graphics systems, a shared, user-generated virtual world is created on-the-fly by end users rather than professional artists. Objects in the world can come and go, and the world can be composed of so many models and textures that it cannot be stored locally on disk. The content must be stored in a shared, networked resource such as the cloud and delivered to clients dynamically.
{"title":"Displaying large user-generated virtual worlds from the cloud","authors":"T. Azim, Ewen Cheslack-Postava, P. Levis","doi":"10.1145/2448196.2448231","DOIUrl":"https://doi.org/10.1145/2448196.2448231","url":null,"abstract":"Unlike most graphics systems, a shared, user-generated virtual world is created on-the-fly by end users rather than professional artists. Objects in the world can come and go, and the world can be composed of so many models and textures that it cannot be stored locally on disk. The content must be stored in a shared, networked resource such as the cloud and delivered to clients dynamically.","PeriodicalId":91160,"journal":{"name":"Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games","volume":"2 1","pages":"183"},"PeriodicalIF":0.0,"publicationDate":"2013-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81785950","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Fast hair collision handling using slice planes
Pub Date : 2013-03-21 DOI: 10.1145/2448196.2448233
Witawat Rungjiratananon, Yoshihiro Kanamori, T. Nishita
In hair simulation, hair collision handling plays an important role to make hair look realistic; hair collision maintains the volume of hair. Without hair collision, hair would appear unnaturally flat.
{"title":"Fast hair collision handling using slice planes","authors":"Witawat Rungjiratananon, Yoshihiro Kanamori, T. Nishita","doi":"10.1145/2448196.2448233","DOIUrl":"https://doi.org/10.1145/2448196.2448233","url":null,"abstract":"In hair simulation, hair collision handling plays an important role to make hair look realistic; hair collision maintains the volume of hair. Without hair collision, hair would appear unnaturally flat.","PeriodicalId":91160,"journal":{"name":"Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games","volume":"63 1","pages":"185"},"PeriodicalIF":0.0,"publicationDate":"2013-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82607758","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A simple method for high quality artist-driven lip syncing
Pub Date : 2013-03-21 DOI: 10.1145/2448196.2448229
Yuyu Xu, Andrew W. Feng, Ari Shapiro
Synchronizing the lip and mouth movements naturally along with animation is an important part of convincing 3D character performance. We present a simple, portable and editable lip-synchronization method that works for multiple languages, requires no machine learning, can be constructed by a skilled animator, runs in real time, and can be personalized for each character. Our method associates animation curves, designed by an animator on a fixed set of static facial poses, with sequential pairs of phonemes (diphones), and then stitches the diphones together to create a set of curves for the facial poses. Diphone- and triphone-based methods have been explored in various previous works [Deng et al. 2006], often requiring machine learning. However, our experiments have shown that diphones are sufficient for producing high-quality lip syncing, and that longer sequences of phonemes are not necessary. Our experiments have shown that skilled animators can generate sufficient data for good-quality results. Thus our algorithm does not need any specific rules about coarticulation, such as dominance functions [Cohen and Massaro 1993] or language rules. Such rules are implicit within the artist-produced data. In order to produce a tractable set of data, our method reduces the full set of 40 English phonemes to a smaller set of 21, which are then annotated by an animator. Once the full diphone set of animations has been generated, it can be reused for multiple characters. Each additional character requires a small set of eight static poses or blendshapes. In addition, each language requires a new set of diphones, although similar phonemes among languages can share the same diphone curves. We show how to reuse our English diphone set to adapt to a Mandarin diphone set.
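The stitching step can be sketched as follows, assuming each diphone (phoneme pair) maps to a list of (time, weight) keys for one facial pose; the curve data, key format, and fixed spacing below are made-up placeholders, not the authors' actual curve representation:

```python
def stitch_diphones(phonemes, diphone_curves, spacing=0.2):
    """Concatenate artist-authored diphone curves into one animation track.

    phonemes      : the phoneme sequence of the utterance.
    diphone_curves: maps a phoneme pair to (time, weight) keys for a pose.
    spacing       : seconds between successive diphone starts (placeholder).
    Returns a time-sorted list of (time, weight) keys.
    """
    track = []
    for i, pair in enumerate(zip(phonemes, phonemes[1:])):
        curve = diphone_curves.get(pair, [])  # unknown pairs contribute nothing
        offset = i * spacing
        track.extend((offset + t, w) for t, w in curve)
    return sorted(track)
```

In a full system the per-pose tracks would then drive blendshape weights, with neighbouring diphone curves overlapping rather than simply abutting; the fixed `spacing` stands in for timing taken from the audio.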
{"title":"A simple method for high quality artist-driven lip syncing","authors":"Yuyu Xu, Andrew W. Feng, Ari Shapiro","doi":"10.1145/2448196.2448229","DOIUrl":"https://doi.org/10.1145/2448196.2448229","url":null,"abstract":"Synchronizing the lip and mouth movements naturally along with animation is an important part of convincing 3D character performance. We present a simple, portable and editable lip-synchronization method that works for multiple languages, requires no machine learning, can be constructed by a skilled animator, runs in real time, and can be personalized for each character. Our method associates animation curves designed by an animator on a fixed set of static facial poses, with sequential pairs of phonemes (diphones), and then stitch the diphones together to create a set of curves for the facial poses. Diphone- and triphone-based methods have been explored in various previous works [Deng et al. 2006], often requiring machine learning. However, our experiments have shown that diphones are sufficient for producing high-quality lip syncing, and that longer sequences of phonemes are not necessary. Our experiments have shown that skilled animators can sufficiently generate the data needed for good quality results. Thus our algorithm does not need any specific rules about coarticulation, such as dominance functions [Cohen and Massaro 1993] or language rules. Such rules are implicit within the artist-produced data. In order to produce a tractable set of data, our method reduces the full set of 40 English phonemes to a smaller set of 21, which are then annotated by an animator. Once the full diphone set of animations has been generated, it can be reused for multiple characters. Each additional character requires a small set of eight static poses or blendshapes. In addition, each language requires a new set of diphones, although similar phonemes among languages can share the same diphone curves. 
We show how to reuse our English diphone set to adapt to a Mandarin diphone set.","PeriodicalId":91160,"journal":{"name":"Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games","volume":"3 1","pages":"181"},"PeriodicalIF":0.0,"publicationDate":"2013-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83641966","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Filtering color mapped textures and surfaces
Pub Date : 2013-03-21 DOI: 10.1145/2448196.2448217
E. Heitz, D. Nowrouzezahrai, Pierre Poulin, Fabrice Neyret
Color map textures applied directly to surfaces, to geometric microsurface details, or to procedural functions (such as noise), are commonly used to enhance visual detail. Their simplicity and ability to mimic a wide range of realistic appearances have led to their adoption in many rendering problems. As with any textured or geometric detail, proper filtering is needed to reduce aliasing when viewed across a range of distances, but accurate and efficient color map filtering remains an open problem for several reasons: color maps are complex non-linear functions, especially when mapped through procedural noise and/or geometry-dependent functions, and the effects of perspective and masking further complicate the filtering over a pixel's footprint. We accurately solve this problem by computing and sampling from specialized filtering distributions on-the-fly, yielding very fast performance. We filter color map textures applied to (macro-scale) surfaces, as well as color maps applied according to (micro-scale) geometric details. We introduce a novel representation of a (potentially modulated) color map's distribution over pixel footprints using Gaussian statistics and, in the more complex case of high-resolution color mapped microsurface details, our filtering is view- and light-dependent, and capable of correctly handling masking and occlusion effects. Our results match ground truth and our solution is well suited to real-time applications, requires only a few lines of shader code (provided in supplemental material), is high performance, and has a negligible memory footprint.
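The core idea of filtering the color map against the footprint's scalar statistics, rather than averaging already-mapped colors, can be sketched numerically. This is a brute-force midpoint-rule integral against a Gaussian, my stand-in for the paper's real-time machinery:

```python
import math

def filtered_color(color_map, mean, std, steps=64):
    """Estimate E[color_map(s)] for s ~ N(mean, std^2).

    color_map: non-linear function from a scalar to an RGB tuple.
    mean, std: Gaussian statistics of the scalar over the pixel footprint.
    Because color_map is non-linear, this generally differs from
    color_map(mean), which is what naive mip-mapping would return.
    """
    if std == 0:
        return list(color_map(mean))
    acc, total_w = None, 0.0
    for k in range(steps):
        # midpoint samples spanning +-3 sigma around the mean
        s = mean + std * (-3.0 + 6.0 * (k + 0.5) / steps)
        w = math.exp(-0.5 * ((s - mean) / std) ** 2)  # Gaussian weight
        c = color_map(s)
        acc = ([w * x for x in c] if acc is None
               else [a + w * x for a, x in zip(acc, c)])
        total_w += w
    return [a / total_w for a in acc]
```

With a hard red-to-blue step at s = 0.5 and mean 0.5, the filtered result is an even red/blue mix rather than pure blue, illustrating why integrating the map against the footprint distribution avoids the bias of mapping a pre-averaged scalar.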
{"title":"Filtering color mapped textures and surfaces","authors":"E. Heitz, D. Nowrouzezahrai, Pierre Poulin, Fabrice Neyret","doi":"10.1145/2448196.2448217","DOIUrl":"https://doi.org/10.1145/2448196.2448217","url":null,"abstract":"Color map textures applied directly to surfaces, to geometric microsurface details, or to procedural functions (such as noise), are commonly used to enhance visual detail. Their simplicity and ability to mimic a wide range of realistic appearances have led to their adoption in many rendering problems. As with any textured or geometric detail, proper filtering is needed to reduce aliasing when viewed across a range of distances, but accurate and efficient color map filtering remains an open problem for several reasons: color maps are complex non-linear functions, especially when mapped through procedural noise and/or geometry-dependent functions, and the effects of perspective and masking further complicate the filtering over a pixel's footprint. We accurately solve this problem by computing and sampling from specialized filtering distributions on-the-fly, yielding very fast performance. We filter color map textures applied to (macro-scale) surfaces, as well as color maps applied according to (micro-scale) geometric details. We introduce a novel representation of a (potentially modulated) color map's distribution over pixel footprints using Gaussian statistics and, in the more complex case of high-resolution color mapped microsurface details, our filtering is view- and light-dependent, and capable of correctly handling masking and occlusion effects. Our results match ground truth and our solution is well suited to real-time applications, requires only a few lines of shader code (provided in supplemental material), is high performance, and has a negligible memory footprint.","PeriodicalId":91160,"journal":{"name":"Proceedings. 
ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games","volume":"166 1","pages":"129-136"},"PeriodicalIF":0.0,"publicationDate":"2013-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77934314","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 24
Dynamics based 3D skeletal hand tracking
Pub Date : 2013-03-21 DOI: 10.1145/2448196.2448232
S. Melax, L. Keselman, Sterling Orsten
Natural human computer interaction motivates hand tracking research, preferably without requiring the user to wear special hardware or markers. Ideally, a hand tracking solution would provide not only points of interest, but the full state of an entire hand. [Oikonomidis et al. 2011] demonstrated a particle swarm optimization that tracked a 3D skeletal hand model from a single depth camera, albeit using significant computing resources. In contrast, we track the hand from a single depth camera using an efficient physical simulation, which incrementally updates a model's fit and explores alternative candidate poses based on a variety of heuristics. Our approach enables real-time, robust 3D skeletal tracking of a user's hand, while utilizing a single x86 CPU core for processing.
{"title":"Dynamics based 3D skeletal hand tracking","authors":"S. Melax, L. Keselman, Sterling Orsten","doi":"10.1145/2448196.2448232","DOIUrl":"https://doi.org/10.1145/2448196.2448232","url":null,"abstract":"Natural human computer interaction motivates hand tracking research, preferably without requiring the user to wear special hardware or markers. Ideally, a hand tracking solution would provide not only points of interest, but the full state of an entire hand. [Oikonomidis et al. 2011] demonstrated a particle swarm optimization that tracked a 3D skeletal hand model from a single depth camera, albeit using significant computing resources. In contrast, we track the hand from a single depth camera using an efficient physical simulation, which incrementally updates a model's fit and explores alternative candidate poses based on a variety of heuristics. Our approach enables real-time, robust 3D skeletal tracking of a user's hand, while utilizing a single x86 CPU core for processing.","PeriodicalId":91160,"journal":{"name":"Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games","volume":"33 1","pages":"184"},"PeriodicalIF":0.0,"publicationDate":"2013-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78539738","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 194