Shangchen Han, Beibei Liu, Randi Cabezas, Christopher D. Twigg, Peizhao Zhang, Jeff Petkau, Tsz-Ho Yu, Chun-Jung Tai, Muzaffer Akbay, Z. Wang, Asaf Nitzan, Gang Dong, Yuting Ye, Lingling Tao, Chengde Wan, Robert Wang
We present a system for real-time hand-tracking to drive virtual and augmented reality (VR/AR) experiences. Using four fisheye monochrome cameras, our system generates accurate and low-jitter 3D hand motion across a large working volume for a diverse set of users. We achieve this by proposing neural network architectures for detecting hands and estimating hand keypoint locations. Our hand detection network robustly handles a variety of real-world environments. The keypoint estimation network leverages tracking history to produce spatially and temporally consistent poses. We design scalable, semi-automated mechanisms to collect a large and diverse set of ground-truth data using a combination of manual annotation and automated tracking. Additionally, we introduce a detection-by-tracking method that increases smoothness while reducing computational cost; the optimized system runs at 60 Hz on PC and 30 Hz on a mobile processor. Together, these contributions yield a practical system for capturing a user's hands that is the default feature on the Oculus Quest VR headset, powering input and social presence.
{"title":"MEgATrack: monochrome egocentric articulated hand-tracking for virtual reality","authors":"Shangchen Han, Beibei Liu, Randi Cabezas, Christopher D. Twigg, Peizhao Zhang, Jeff Petkau, Tsz-Ho Yu, Chun-Jung Tai, Muzaffer Akbay, Z. Wang, Asaf Nitzan, Gang Dong, Yuting Ye, Lingling Tao, Chengde Wan, Robert Wang","doi":"10.1145/3386569.3392452","DOIUrl":"https://doi.org/10.1145/3386569.3392452","url":null,"abstract":"We present a system for real-time hand-tracking to drive virtual and augmented reality (VR/AR) experiences. Using four fisheye monochrome cameras, our system generates accurate and low-jitter 3D hand motion across a large working volume for a diverse set of users. We achieve this by proposing neural network architectures for detecting hands and estimating hand keypoint locations. Our hand detection network robustly handles a variety of real world environments. The keypoint estimation network leverages tracking history to produce spatially and temporally consistent poses. We design scalable, semi-automated mechanisms to collect a large and diverse set of ground truth data using a combination of manual annotation and automated tracking. Additionally, we introduce a detection-by-tracking method that increases smoothness while reducing the computational cost; the optimized system runs at 60Hz on PC and 30Hz on a mobile processor. Together, these contributions yield a practical system for capturing a user’s hands and is the default feature on the Oculus Quest VR headset powering input and social presence.","PeriodicalId":7121,"journal":{"name":"ACM Trans. Graph.","volume":"27 1","pages":"87"},"PeriodicalIF":0.0,"publicationDate":"2020-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74826439","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Shi-Hong Liu, Pai-Chien Yen, Yi-Hsuan Mao, Yu-Hsin Lin, E. Chandra, Mike Y. Chen
We present HeadBlaster, a novel wearable technology that creates motion perception by applying ungrounded forces to the head to stimulate the vestibular and proprioceptive sensory systems. Compared to motion platforms that tilt the body, HeadBlaster more closely approximates how lateral inertial and centrifugal forces are felt during real motion, providing more persistent motion perception. In addition, because HeadBlaster actuates only the head rather than the entire body, it eliminates the mechanical motion platforms to which users must be constrained, which improves user mobility and enables room-scale VR experiences. We designed a wearable HeadBlaster system with six air nozzles integrated into a VR headset, using compressed-air jets to provide persistent, lateral propulsion forces. By controlling multiple air jets, it can create the perception of lateral acceleration in 360 degrees. We conducted a series of perception and human-factors studies to quantify head movement, the persistence of perceived acceleration, and the minimum level of detectable force. We then explored the user experience of HeadBlaster through two VR applications: a custom surfing game, and a commercial driving simulator paired with a commercial motion platform. Study results showed that HeadBlaster provided a significantly longer perceived duration of acceleration than motion platforms. It also significantly improved realism and immersion, and was preferred by users over VR alone. In addition, it can be used in conjunction with motion platforms to further augment the user experience.
{"title":"HeadBlaster: a wearable approach to simulating motion perception using head-mounted air propulsion jets","authors":"Shi-Hong Liu, Pai-Chien Yen, Yi-Hsuan Mao, Yu-Hsin Lin, E. Chandra, Mike Y. Chen","doi":"10.1145/3386569.3392482","DOIUrl":"https://doi.org/10.1145/3386569.3392482","url":null,"abstract":"We present HeadBlaster, a novel wearable technology that creates motion perception by applying ungrounded force to the head to stimulate the vestibular and proprioception sensory systems. Compared to motion platforms that tilt the body, HeadBlaster more closely approximates how lateral inertial and centrifugal forces are felt during real motion to provide more persistent motion perception. In addition, because HeadBlaster only actuates the head rather than the entire body, it eliminates the mechanical motion platforms that users must be constrained to, which improves user mobility and enables room-scale VR experiences. We designed a wearable HeadBlaster system with 6 air nozzles integrated into a VR headset, using compressed air jets to provide persistent, lateral propulsion forces. By controlling multiple air jets, it is able to create the perception of lateral acceleration in 360 degrees. We conducted a series of perception and human-factor studies to quantify the head movement, the persistence of perceived acceleration, and the minimal level of detectable forces. We then explored the user experience of HeadBlaster through two VR applications: a custom surfing game, and a commercial driving simulator together with a commercial motion platform. Study results showed that HeadBlaster provided significantly longer perceived duration of acceleration than motion platforms. It also significantly improved realism and immersion, and was preferred by users compared to using VR alone. In addition, it can be used in conjunction with motion platforms to further augment the user experience.","PeriodicalId":7121,"journal":{"name":"ACM Trans. Graph.","volume":"24 1","pages":"84"},"PeriodicalIF":0.0,"publicationDate":"2020-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74275768","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Joshuah Wolper, Yunuo Chen, Minchen Li, Yu Fang, Ziyin Qu, Jiecong Lu, Meggie Cheng, Chenfanfu Jiang
Dynamic fracture surrounds us in our day-to-day lives, but animating this phenomenon is notoriously difficult and only further complicated by anisotropic materials---those with underlying structures that dictate preferred fracture directions. Thus, we present AnisoMPM: a robust and general approach for animating the dynamic fracture of isotropic, transversely isotropic, and orthotropic materials. AnisoMPM has three core components: a technique for anisotropic damage evolution, methods for anisotropic elastic response, and a coupling approach. For anisotropic damage, we adopt a non-local continuum damage mechanics (CDM) geometric approach to crack modeling and augment this with structural tensors to encode material anisotropy. Furthermore, we discretize our damage evolution with explicit and implicit integration, giving a high degree of computational efficiency and flexibility. We also utilize a QR-decomposition based anisotropic constitutive model that is inversion safe, more efficient than SVD models, easy to implement, robust to extreme deformations, and that captures all aforementioned modes of anisotropy. Our elasto-damage coupling is enforced through an additive decomposition of our hyperelasticity into a tensile and compressive component in which damage is used to degrade the tensile contribution to allow for material separation. For extremely stiff fibered materials, we further introduce a novel Galerkin weak form discretization that enables embedded directional inextensibility. We present this as a hard-constrained grid velocity solve that poses an alternative to our anisotropic elasticity that is locking-free and can model very stiff materials.
"AnisoMPM: animating anisotropic damage mechanics." ACM Trans. Graph., 2020, Article 37. DOI: https://doi.org/10.1145/3386569.3392428
M. Edelstein, Danielle Ezuz, M. Ben-Chen
In this paper, we propose a fully automatic method for shape correspondence that is widely applicable, and especially effective for non-isometric shapes and shapes of different topology. We observe that fully automatic shape correspondence can be decomposed into a hybrid discrete/continuous optimization problem: we find the best sparse landmark correspondence, whose sparse-to-dense extension minimizes a local metric distortion. To tackle the combinatorial task of landmark correspondence, we use an evolutionary genetic algorithm, where the local distortion of the sparse-to-dense extension is used as the objective function. We design novel geometrically guided genetic operators, which, when combined with our objective, are highly effective for non-isometric shape matching. Our method outperforms state-of-the-art methods for automatic shape correspondence both quantitatively and qualitatively on challenging datasets.
"ENIGMA: evolutionary non-isometric geometry MAtching." ACM Trans. Graph., 2020, Article 112. DOI: https://doi.org/10.1145/3386569.3392447
Antoine Toisoul, A. Ghosh
We propose two novel contributions for measurement-based rendering of diffraction effects in the surface reflectance of planar homogeneous diffractive materials. First, as a general solution for commonly manufactured materials, we propose a practical data-driven rendering technique and a measurement approach to efficiently render complex diffraction effects in real time. Our measurement step simply involves photographing a planar diffractive sample illuminated with an LED flash. Here, we directly record the resultant diffraction pattern on the sample surface due to a narrow-band point source illumination. Furthermore, we propose an efficient rendering method that exploits the measurement in conjunction with the Huygens-Fresnel principle to fit relevant diffraction parameters based on a first-order approximation. Our proposed data-driven rendering method requires the precomputation of a single diffraction look-up table for accurate spectral rendering of complex diffraction effects. Second, for sharp specular samples, we propose a novel method for practical measurement of the underlying diffraction grating using out-of-focus “bokeh” photography of the specular highlight. We demonstrate how the measured bokeh can be employed as a height field to drive a diffraction shader based on a first-order approximation for efficient real-time rendering. Finally, we also derive analytic solutions for a few special cases of diffraction from our measurements and demonstrate realistic rendering results under complex light sources and environments.
"Practical acquisition and rendering of diffraction effects in surface reflectance." ACM Trans. Graph., 2017. DOI: https://doi.org/10.1145/3072959.3126805
Zhixin Shu, Sunil Hadap, Eli Shechtman, Kalyan Sunkavalli, Sylvain Paris, D. Samaras
Lighting is a critical element of portrait photography. However, good lighting design typically requires complex equipment and significant time and expertise. Our work simplifies this task using a relighting technique that transfers the desired illumination of one portrait onto another. The novelty in our approach to this challenging problem is our formulation of relighting as a mass transport problem. We start from standard color histogram matching that only captures the overall tone of the illumination, and we show how to use the mass-transport formulation to make it dependent on facial geometry. We fit a three-dimensional (3D) morphable face model to the portrait, and for each pixel, we combine the color value with the corresponding 3D position and normal. We then solve a mass-transport problem in this augmented space to generate a color remapping that achieves localized, geometry-aware relighting. Our technique is robust to variations in facial appearance and small errors in face reconstruction. As we demonstrate, this allows our technique to handle a variety of portraits and illumination conditions, including scenarios that are challenging for previous methods.
{"title":"Portrait lighting transfer using a mass transport approach","authors":"Zhixin Shu, Sunil Hadap, Eli Shechtman, Kalyan Sunkavalli, Sylvain Paris, D. Samaras","doi":"10.1145/3072959.3126847","DOIUrl":"https://doi.org/10.1145/3072959.3126847","url":null,"abstract":"Lighting is a critical element of portrait photography. However, good lighting design typically requires complex equipment and significant time and expertise. Our work simplifies this task using a relighting technique that transfers the desired illumination of one portrait onto another. The novelty in our approach to this challenging problem is our formulation of relighting as a mass transport problem. We start from standard color histogram matching that only captures the overall tone of the illumination, and we show how to use the mass-transport formulation to make it dependent on facial geometry. We fit a three-dimensional (3D) morphable face model to the portrait, and for each pixel, we combine the color value with the corresponding 3D position and normal. We then solve a mass-transport problem in this augmented space to generate a color remapping that achieves localized, geometry-aware relighting. Our technique is robust to variations in facial appearance and small errors in face reconstruction. As we demonstrate, this allows our technique to handle a variety of portraits and illumination conditions, including scenarios that are challenging for previous methods.","PeriodicalId":7121,"journal":{"name":"ACM Trans. Graph.","volume":"86 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2017-07-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87744171","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Jieyu Chu, Nafees Bin Zafar, Xubo Yang
We present an algorithmically efficient and parallelized domain decomposition based approach to solving Poisson’s equation on irregular domains. Our technique employs the Schur complement method, which permits a high degree of parallel efficiency on multicore systems. We create a novel Schur complement preconditioner which achieves faster convergence, and requires less computation time and memory. This domain decomposition method allows us to apply different linear solvers for different regions of the flow. Subdomains with regular boundaries can be solved with an FFT-based Fast Poisson Solver. We can solve systems with 1,024³ degrees of freedom, and demonstrate its use for the pressure projection step of incompressible liquid and gas simulations. The results demonstrate considerable speedup over preconditioned conjugate gradient methods commonly employed to solve such problems, including a multigrid preconditioned conjugate gradient method.
"A Schur complement preconditioner for scalable parallel fluid simulation." ACM Trans. Graph., 2017. DOI: https://doi.org/10.1145/3072959.3126843
Zherong Pan, Dinesh Manocha
We present a novel algorithm to control the physically-based animation of smoke. Given a set of keyframe smoke shapes, we compute a dense sequence of control force fields that can drive the smoke shape to match several keyframes at certain time instances. Our approach formulates this control problem as a spacetime optimization constrained by partial differential equations. To compute the locally optimal control forces, we alternately optimize the velocity fields and density fields using an alternating direction method of multipliers (ADMM) optimizer. To reduce the high cost of multiple passes of fluid resimulation during velocity field optimization, we utilize the coherence between consecutive fluid simulation passes. We demonstrate the benefits of our approach by computing accurate solutions on 2D and 3D benchmarks. In practice, we observe up to an order of magnitude improvement over prior optimal control methods.
"Efficient solver for spacetime control of smoke." ACM Trans. Graph., 2017. DOI: https://doi.org/10.1145/3072959.3126807
Hongxing Qin, Yi Chen, Jinlong He, Baoquan Chen
In this article, we present a multi-class blue noise sampling algorithm that treats samples as the constrained Wasserstein barycenter of multiple density distributions. Using an entropic regularization term, we obtain a constrained transport plan in the optimal transport problem that breaks the partition required by the previous Capacity-Constrained Voronoi Tessellation method. The entropic regularization term not only controls the spatial regularity of blue noise sampling but also reduces conflicts between the desired centroids of Voronoi cells for multi-class sampling. Moreover, the adaptive blue noise property is guaranteed for each individual class, as well as their combined class. Our method can be easily extended to multi-class sampling on a point set surface. We also demonstrate applications in object distribution and color stippling.
"Wasserstein blue noise sampling." ACM Trans. Graph., 2017. DOI: https://doi.org/10.1145/3072959.3126841