High-quality global illumination can enhance the visual perception of depth cues and the local thickness of volumetric data, but it is seldom used in scientific visualization because of its high computational cost. This paper presents a novel grid-based illumination technique that is specially designed and optimized for volume visualization. It supports common light sources and dynamic transfer function editing. Our method models light propagation in a volume, including both absorption and scattering, using a convection-diffusion equation that can be solved numerically. The main advantage of this technique is that light modeling and simulation can be separated: we use a unified partial differential equation to model various illumination effects and adopt highly parallelized grid-based numerical schemes to solve it. Results show that our method achieves high-quality volume illumination with dynamic color and opacity mapping and various light sources in real time. The added illumination effects greatly enhance the visual perception of the spatial structures of volume data.
{"title":"Fast global illumination for interactive volume visualization","authors":"Yubo Zhang, K. Ma","doi":"10.1145/2448196.2448205","DOIUrl":"https://doi.org/10.1145/2448196.2448205","url":null,"abstract":"High quality global illumination can enhance the visual perception of depth cue and local thickness of volumetric data but it is seldom used in scientific visualization because of its high computational cost. This paper presents a novel grid-based illumination technique which is specially designed and optimized for volume visualization purpose. It supports common light sources and dynamic transfer function editing. Our method models light propagation, including both absorption and scattering, in a volume using a convection-diffusion equation that can be solved numerically. The main advantage of such technique is that the light modeling and simulation can be separated, where we can use a unified partial-differential equation to model various illumination effects, and adopt highly-parallelized grid-based numerical schemes to solve it. Results show that our method can achieve high quality volume illumination with dynamic color and opacity mapping and various light sources in real-time. The added illumination effects can greatly enhance the visual perception of spatial structures of volume data.","PeriodicalId":91160,"journal":{"name":"Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games","volume":"24 1","pages":"55-62"},"PeriodicalIF":0.0,"publicationDate":"2013-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83243040","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Screen-space ambient occlusion and obscurance (AO) techniques have become the de facto methods for ambient light attenuation and contact shadows in real-time rendering. Although extensive research has been conducted to improve the quality and performance of AO techniques, view-dependent artifacts remain a major issue. This paper introduces Multi-view Ambient Occlusion, a generic per-fragment view weighting scheme for evaluating screen-space occlusion or obscurance using multiple, arbitrary views, such as the readily available shadow maps. Additionally, it exploits the resulting weights to perform adaptive sampling based on the importance of each view, reducing the total number of samples while maintaining image quality. Multi-view Ambient Occlusion improves and stabilizes the screen-space AO estimation without overestimating the results and can be combined with a variety of existing screen-space AO techniques. We demonstrate the results of our sampling method with both open volume- and solid angle-based AO algorithms.
{"title":"Multi-view ambient occlusion with importance sampling","authors":"K. Vardis, Georgios Papaioannou, A. Gaitatzes","doi":"10.1145/2448196.2448214","DOIUrl":"https://doi.org/10.1145/2448196.2448214","url":null,"abstract":"Screen-space ambient occlusion and obscurance (AO) techniques have become de-facto methods for ambient light attenuation and contact shadows in real-time rendering. Although extensive research has been conducted to improve the quality and performance of AO techniques, view-dependent artifacts remain a major issue. This paper introduces Multi-view Ambient Occlusion, a generic per-fragment view weighting scheme for evaluating screen-space occlusion or obscurance using multiple, arbitrary views, such as the readily available shadow maps. Additionally, it exploits the resulting weights to perform adaptive sampling, based on the importance of each view to reduce the total number of samples, while maintaining the image quality. Multi-view Ambient Occlusion improves and stabilizes the screen-space AO estimation without overestimating the results and can be combined with a variety of existing screen-space AO techniques. We demonstrate the results of our sampling method with both open volume- and solid angle-based AO algorithms.","PeriodicalId":91160,"journal":{"name":"Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games","volume":"36 1","pages":"111-118"},"PeriodicalIF":0.0,"publicationDate":"2013-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89923771","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mubbasir Kapadia, I-Kao Chiang, Tiju Thomas, N. Badler, Joseph T. Kider
There has been a recent paradigm shift in the computer animation industry, with an increasing use of pre-recorded motion for animating virtual characters. A fundamental requirement for using motion capture data is an efficient method for indexing and retrieving motions. In this paper, we propose a flexible, efficient method for searching arbitrarily complex motions in large motion databases. Motions are encoded using keys that represent a wide array of structural, geometric, and dynamic features of human motion. Keys provide a representative search space for indexing motions, and users can specify sequences of key values as well as multiple combinations of key sequences to search for complex motions. We use a trie-based data structure to provide an efficient mapping from key sequences to motions. The search times (even on a single CPU) are very fast, opening the possibility of using large motion data sets in real-time applications.
{"title":"Efficient motion retrieval in large motion databases","authors":"Mubbasir Kapadia, I-Kao Chiang, Tiju Thomas, N. Badler, Joseph T. Kider","doi":"10.1145/2448196.2448199","DOIUrl":"https://doi.org/10.1145/2448196.2448199","url":null,"abstract":"There has been a recent paradigm shift in the computer animation industry with an increasing use of pre-recorded motion for animating virtual characters. A fundamental requirement to using motion capture data is an efficient method for indexing and retrieving motions. In this paper, we propose a flexible, efficient method for searching arbitrarily complex motions in large motion databases. Motions are encoded using keys which represent a wide array of structural, geometric and, dynamic features of human motion. Keys provide a representative search space for indexing motions and users can specify sequences of key values as well as multiple combination of key sequences to search for complex motions. We use a trie-based data structure to provide an efficient mapping from key sequences to motions. The search times (even on a single CPU) are very fast, opening the possibility of using large motion data sets in real-time applications.","PeriodicalId":91160,"journal":{"name":"Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games","volume":"29 1","pages":"19-28"},"PeriodicalIF":0.0,"publicationDate":"2013-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88517415","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Forest Pro, made by iToo, lets you create large areas of trees, bushes, and other models in a short time. If you want to place 50,000 trees in a scene, each with more than ten thousand polygons, then even if the computer can still handle the viewport, it may get stuck at render time. With Forest Pro, creating tens of thousands of trees is easy, and the rendering process does not stall; this is a big advantage for animators who often produce outdoor landscapes, as shown in Figure 1.
{"title":"Creating a large area of trees based on FOREST PRO","authors":"Jincan Lin, Yun-Wen Huang, Junfeng Yao","doi":"10.1145/2448196.2448230","DOIUrl":"https://doi.org/10.1145/2448196.2448230","url":null,"abstract":"Forest Pro makes you can create large area trees, bushes and other models in a short time. If you want to create 50000 trees in a scene, with each tree has more than 10 thousands multilateral type, even if the computer can drag, it may get stuck in the rendering time. Now create tens of thousands of trees through Forest Pro is easy, but also it won't get stuck in the rendering process, this is a big edge for the animators who often make outdoor landscape. Made by IToo company. As is shown in figure 1.","PeriodicalId":91160,"journal":{"name":"Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games","volume":"11 2 1","pages":"182"},"PeriodicalIF":0.0,"publicationDate":"2013-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84079713","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mingcen Gao, Thanh-Tung Cao, T. Tan, Zhiyong Huang
Flipping is a local and efficient operation for constructing the convex hull in an incremental fashion. However, it is known that the traditional flip algorithm cannot compute the convex hull when applied to a polyhedron in R3. Our novel Flip-Flop algorithm is a variant of the flip algorithm. It overcomes this deficiency and always computes, with provable correctness, the convex hull of a given star-shaped polyhedron. Applying this to construct the convex hull of a point set in R3, we develop ffHull, a flip algorithm that allows non-restrictive insertion of many vertices before any flipping of edges. This is unlike the well-known incremental approach of strictly alternating between inserting a single vertex and flipping. The new approach is not only simpler and more efficient for CPU implementation but also maps well to the massively parallel nature of the modern GPU. As shown in our experiments, ffHull running on the CPU is as fast as the best-known convex hull implementation, qHull. On the GPU, ffHull also outperforms all known prior work. From this, we further obtain the first known solution to computing the 2D regular triangulation on the GPU.
{"title":"Flip-flop: convex hull construction via star-shaped polyhedron in 3D","authors":"Mingcen Gao, Thanh-Tung Cao, T. Tan, Zhiyong Huang","doi":"10.1145/2448196.2448203","DOIUrl":"https://doi.org/10.1145/2448196.2448203","url":null,"abstract":"Flipping is a local and efficient operation to construct the convex hull in an incremental fashion. However, it is known that the traditional flip algorithm is not able to compute the convex hull when applied to a polyhedron in R3. Our novel Flip-Flop algorithm is a variant of the flip algorithm. It overcomes the deficiency of the traditional one to always compute the convex hull of a given star-shaped polyhedron with provable correctness. Applying this to construct convex hull of a point set in R3, we develop ffHull, a flip algorithm that allows nonrestrictive insertion of many vertices before any flipping of edges. This is unlike the well-known incremental fashion of strictly alternating between inserting a single vertex and flipping. The new approach is not only simpler and more efficient for CPU implementation but also maps well to the massively parallel nature of the modern GPU. As shown in our experiments, ffHull running on the CPU is as fast as the best-known convex hull implementation, qHull. As for the GPU, ffHull also outperforms all known prior work. From this, we further obtain the first known solution to computing the 2D regular triangulation on the GPU.","PeriodicalId":91160,"journal":{"name":"Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games","volume":"53 1","pages":"45-54"},"PeriodicalIF":0.0,"publicationDate":"2013-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85714295","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Unlike the content of most graphics systems, a shared, user-generated virtual world is created on the fly by end users rather than professional artists. Objects in the world can come and go, and the world can be composed of so many models and textures that it cannot be stored locally on disk. The content must instead be stored in a shared, networked resource such as the cloud and delivered to clients dynamically.
{"title":"Displaying large user-generated virtual worlds from the cloud","authors":"T. Azim, Ewen Cheslack-Postava, P. Levis","doi":"10.1145/2448196.2448231","DOIUrl":"https://doi.org/10.1145/2448196.2448231","url":null,"abstract":"Unlike most graphics systems, a shared, user-generated virtual world is created on-the-fly by end users rather than professional artists. Objects in the world can come and go, and the world can be composed of so many models and textures that it cannot be stored locally on disk. The content must be stored in a shared, networked resource such as the cloud and delivered to clients dynamically.","PeriodicalId":91160,"journal":{"name":"Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games","volume":"2 1","pages":"183"},"PeriodicalIF":0.0,"publicationDate":"2013-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81785950","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Witawat Rungjiratananon, Yoshihiro Kanamori, T. Nishita
In hair simulation, collision handling plays an important role in making hair look realistic: hair-hair collisions maintain the volume of the hair. Without collision handling, hair would appear unnaturally flat.
{"title":"Fast hair collision handling using slice planes","authors":"Witawat Rungjiratananon, Yoshihiro Kanamori, T. Nishita","doi":"10.1145/2448196.2448233","DOIUrl":"https://doi.org/10.1145/2448196.2448233","url":null,"abstract":"In hair simulation, hair collision handling plays an important role to make hair look realistic; hair collision maintains the volume of hair. Without hair collision, hair would appear unnaturally flat.","PeriodicalId":91160,"journal":{"name":"Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games","volume":"63 1","pages":"185"},"PeriodicalIF":0.0,"publicationDate":"2013-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82607758","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Naturally synchronizing lip and mouth movements with animation is an important part of a convincing 3D character performance. We present a simple, portable, and editable lip-synchronization method that works for multiple languages, requires no machine learning, can be constructed by a skilled animator, runs in real time, and can be personalized for each character. Our method associates animation curves, designed by an animator on a fixed set of static facial poses, with sequential pairs of phonemes (diphones), and then stitches the diphones together to create a set of curves for the facial poses. Diphone- and triphone-based methods have been explored in various previous works [Deng et al. 2006], often requiring machine learning. However, our experiments have shown that diphones are sufficient for producing high-quality lip syncing and that longer sequences of phonemes are not necessary. Our experiments have also shown that skilled animators can generate the data needed for good-quality results. Thus our algorithm does not need any specific rules about coarticulation, such as dominance functions [Cohen and Massaro 1993] or language rules; such rules are implicit within the artist-produced data. To produce a tractable set of data, our method reduces the full set of 40 English phonemes to a smaller set of 21, which are then annotated by an animator. Once the full diphone set of animations has been generated, it can be reused for multiple characters. Each additional character requires only a small set of eight static poses or blendshapes. In addition, each language requires a new set of diphones, although similar phonemes among languages can share the same diphone curves. We show how to reuse our English diphone set to adapt it to a Mandarin diphone set.
{"title":"A simple method for high quality artist-driven lip syncing","authors":"Yuyu Xu, Andrew W. Feng, Ari Shapiro","doi":"10.1145/2448196.2448229","DOIUrl":"https://doi.org/10.1145/2448196.2448229","url":null,"abstract":"Synchronizing the lip and mouth movements naturally along with animation is an important part of convincing 3D character performance. We present a simple, portable and editable lip-synchronization method that works for multiple languages, requires no machine learning, can be constructed by a skilled animator, runs in real time, and can be personalized for each character. Our method associates animation curves designed by an animator on a fixed set of static facial poses, with sequential pairs of phonemes (diphones), and then stitch the diphones together to create a set of curves for the facial poses. Diphone- and triphone-based methods have been explored in various previous works [Deng et al. 2006], often requiring machine learning. However, our experiments have shown that diphones are sufficient for producing high-quality lip syncing, and that longer sequences of phonemes are not necessary. Our experiments have shown that skilled animators can sufficiently generate the data needed for good quality results. Thus our algorithm does not need any specific rules about coarticulation, such as dominance functions [Cohen and Massaro 1993] or language rules. Such rules are implicit within the artist-produced data. In order to produce a tractable set of data, our method reduces the full set of 40 English phonemes to a smaller set of 21, which are then annotated by an animator. Once the full diphone set of animations has been generated, it can be reused for multiple characters. Each additional character requires a small set of eight static poses or blendshapes. In addition, each language requires a new set of diphones, although similar phonemes among languages can share the same diphone curves. We show how to reuse our English diphone set to adapt to a Mandarin diphone set.","PeriodicalId":91160,"journal":{"name":"Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games","volume":"3 1","pages":"181"},"PeriodicalIF":0.0,"publicationDate":"2013-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83641966","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
E. Heitz, D. Nowrouzezahrai, Pierre Poulin, Fabrice Neyret
Color map textures applied directly to surfaces, to geometric microsurface details, or to procedural functions (such as noise), are commonly used to enhance visual detail. Their simplicity and ability to mimic a wide range of realistic appearances have led to their adoption in many rendering problems. As with any textured or geometric detail, proper filtering is needed to reduce aliasing when viewed across a range of distances, but accurate and efficient color map filtering remains an open problem for several reasons: color maps are complex non-linear functions, especially when mapped through procedural noise and/or geometry-dependent functions, and the effects of perspective and masking further complicate the filtering over a pixel's footprint. We accurately solve this problem by computing and sampling from specialized filtering distributions on-the-fly, yielding very fast performance. We filter color map textures applied to (macro-scale) surfaces, as well as color maps applied according to (micro-scale) geometric details. We introduce a novel representation of a (potentially modulated) color map's distribution over pixel footprints using Gaussian statistics and, in the more complex case of high-resolution color mapped microsurface details, our filtering is view- and light-dependent, and capable of correctly handling masking and occlusion effects. Our results match ground truth and our solution is well suited to real-time applications, requires only a few lines of shader code (provided in supplemental material), is high performance, and has a negligible memory footprint.
{"title":"Filtering color mapped textures and surfaces","authors":"E. Heitz, D. Nowrouzezahrai, Pierre Poulin, Fabrice Neyret","doi":"10.1145/2448196.2448217","DOIUrl":"https://doi.org/10.1145/2448196.2448217","url":null,"abstract":"Color map textures applied directly to surfaces, to geometric microsurface details, or to procedural functions (such as noise), are commonly used to enhance visual detail. Their simplicity and ability to mimic a wide range of realistic appearances have led to their adoption in many rendering problems. As with any textured or geometric detail, proper filtering is needed to reduce aliasing when viewed across a range of distances, but accurate and efficient color map filtering remains an open problem for several reasons: color maps are complex non-linear functions, especially when mapped through procedural noise and/or geometry-dependent functions, and the effects of perspective and masking further complicate the filtering over a pixel's footprint. We accurately solve this problem by computing and sampling from specialized filtering distributions on-the-fly, yielding very fast performance. We filter color map textures applied to (macro-scale) surfaces, as well as color maps applied according to (micro-scale) geometric details. We introduce a novel representation of a (potentially modulated) color map's distribution over pixel footprints using Gaussian statistics and, in the more complex case of high-resolution color mapped microsurface details, our filtering is view- and light-dependent, and capable of correctly handling masking and occlusion effects. Our results match ground truth and our solution is well suited to real-time applications, requires only a few lines of shader code (provided in supplemental material), is high performance, and has a negligible memory footprint.","PeriodicalId":91160,"journal":{"name":"Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games","volume":"166 1","pages":"129-136"},"PeriodicalIF":0.0,"publicationDate":"2013-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77934314","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Natural human-computer interaction motivates hand-tracking research, preferably without requiring the user to wear special hardware or markers. Ideally, a hand-tracking solution would provide not only points of interest but the full state of an entire hand. [Oikonomidis et al. 2011] demonstrated a particle swarm optimization that tracked a 3D skeletal hand model from a single depth camera, albeit using significant computing resources. In contrast, we track the hand from a single depth camera using an efficient physical simulation, which incrementally updates a model's fit and explores alternative candidate poses based on a variety of heuristics. Our approach enables real-time, robust 3D skeletal tracking of a user's hand while utilizing a single x86 CPU core for processing.
{"title":"Dynamics based 3D skeletal hand tracking","authors":"S. Melax, L. Keselman, Sterling Orsten","doi":"10.1145/2448196.2448232","DOIUrl":"https://doi.org/10.1145/2448196.2448232","url":null,"abstract":"Natural human computer interaction motivates hand tracking research, preferably without requiring the user to wear special hardware or markers. Ideally, a hand tracking solution would provide not only points of interest, but the full state of an entire hand. [Oikonomidis et al. 2011] demonstrated a particle swarm optimization that tracked a 3D skeletal hand model from a single depth camera, albeit using significant computing resources. In contrast, we track the hand from a single depth camera using an efficient physical simulation, which incrementally updates a model's fit and explores alternative candidate poses based on a variety of heuristics. Our approach enables real-time, robust 3D skeletal tracking of a user's hand, while utilizing a single x86 CPU core for processing.","PeriodicalId":91160,"journal":{"name":"Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games","volume":"33 1","pages":"184"},"PeriodicalIF":0.0,"publicationDate":"2013-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78539738","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}