Title: Fast, memory efficient and resolution independent rendering of cubic Bézier curves using tessellation shaders
Authors: Harish Kumar, Anmol Sud
DOI: https://doi.org/10.1145/3355056.3364548
Venue: SIGGRAPH Asia 2019 Posters (published 2019-11-17)
Abstract: Cubic Bézier curves are an integral part of vector graphics. Standard formats such as Adobe PostScript, SVG, font definitions, and PDF describe path objects as compositions of cubic Bézier curves. Drawing cubic Bézier curves often requires drawing strokes that are less than one device pixel in width. Such strokes, commonly referred to as thin strokes, are very common in creative workflows, but rendering them is computationally expensive and slows down the creative process. Conventionally, thin strokes were rendered with CPU techniques. However, the advent of GPU programming over the last decade has led to the development of SIMD techniques suitable for rendering thin strokes on GPUs. These GPU […]
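The poster's tessellation-shader pipeline is not reproduced in the abstract, but the primitive it consumes is standard. As background, a minimal sketch of evaluating a point on a cubic Bézier curve with de Casteljau's algorithm, the subdivision rule tessellation stages commonly use to flatten such curves; the control-point values in the usage comment are illustrative:

```python
def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameter t in [0, 1]
    via de Casteljau's algorithm (repeated linear interpolation)."""
    lerp = lambda a, b, s: tuple(ai + (bi - ai) * s for ai, bi in zip(a, b))
    # First level: three lerps between the four control points.
    q0, q1, q2 = lerp(p0, p1, t), lerp(p1, p2, t), lerp(p2, p3, t)
    # Second level: two lerps.
    r0, r1 = lerp(q0, q1, t), lerp(q1, q2, t)
    # Final lerp yields the point on the curve.
    return lerp(r0, r1, t)

# Illustrative control polygon: an arch from (0,0) to (1,0).
# cubic_bezier((0,0), (0,1), (1,1), (1,0), 0.5) gives (0.5, 0.75).
```

A tessellation evaluation shader does the same arithmetic per generated vertex; doing it on the GPU is what makes the approach resolution independent.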
Title: BookVIS: Enhancing Browsing Experiences in Bookstores and Libraries
Authors: Zona Kostic, Nathan Weeks, Johann Philipp Dreessen, Jelena Dowey, Jeffrey Baglioni
DOI: https://doi.org/10.1145/3355056.3364594
Venue: SIGGRAPH Asia 2019 Posters (published 2019-11-17)
Abstract: (not available in source)
Title: A Method of Making Wound Molds for Prosthetic Makeup using 3D Printer
Authors: Yoon-Seok Choi, Soonchul Jung, Jin-Seo Kim
DOI: https://doi.org/10.1145/3355056.3364573
Venue: SIGGRAPH Asia 2019 Posters (published 2019-11-17)
Abstract: Conventionally, to make wound props, an artist first carves a wound sculpture in oil clay, makes a wound mold by pouring silicone or plaster over the finished sculpture, and then pours silicone into the mold to produce the wound prop. This approach takes considerable time and effort: one must learn to handle materials such as oil clay and silicone and acquire wound-sculpting techniques. Recently, many users have tried to create wound molds using 3D modeling software and 3D printers, but tasks such as 3D wound modeling and preparing models for 3D printing are difficult for non-experts. This paper suggests a simple and rapid way for users to create a wound mold model from a wound image and print it on a 3D printer. Our method provides easy-to-use capabilities for wound mold production, so that makeup artists who are unfamiliar with 3D modeling can easily create molds using the software.
Title: Computing 3D Clipped Voronoi Diagrams on GPU
Authors: Xiaohan Liu, Dong-Ming Yan
DOI: https://doi.org/10.1145/3355056.3364581
Venue: SIGGRAPH Asia 2019 Posters (published 2019-11-17)
Abstract: Computing clipped Voronoi diagrams in a 3D volume is a challenging problem. In this poster, we propose an efficient GPU implementation to tackle it. After discretizing the 3D volume into a tetrahedral mesh, the main idea of our approach is to use the four planes of each tetrahedron (tet for short) to clip the Voronoi cells, instead of using the bisecting planes of the Voronoi cells to clip the tets as in previous approaches. This strategy drastically reduces computational complexity. Our approach outperforms the state-of-the-art CPU method by up to one order of magnitude.
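The poster's GPU kernel is not given here; as a 2D analogue of the core operation, the following sketch clips a convex polygon against one half-plane (one Sutherland-Hodgman step). The 3D method applies the same successive-plane clipping with the four face planes of each tet; the function name and inputs below are illustrative, not from the paper:

```python
def clip_polygon(poly, a, b, c):
    """Clip a convex polygon (list of (x, y) vertices, CCW order)
    against the half-plane a*x + b*y <= c. Repeating this step for
    each bounding plane mirrors how a cell is clipped by the four
    face planes of a tetrahedron in the 3D version."""
    inside = lambda p: a * p[0] + b * p[1] <= c
    out = []
    n = len(poly)
    for i in range(n):
        p, q = poly[i], poly[(i + 1) % n]
        pin, qin = inside(p), inside(q)
        if pin:
            out.append(p)
        if pin != qin:
            # Edge crosses the clipping line: add the intersection point.
            t = (c - a * p[0] - b * p[1]) / (a * (q[0] - p[0]) + b * (q[1] - p[1]))
            out.append((p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1])))
    return out
```

Per-tet clipping is cheap because each tet contributes only four planes, which is the source of the complexity reduction the abstract claims.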
Title: Real-time Table Tennis Forecasting System based on Long Short-term Pose Prediction Network
Authors: Erwin Wu, Florian Perteneder, H. Koike
DOI: https://doi.org/10.1145/3355056.3364555
Venue: SIGGRAPH Asia 2019 Posters (published 2019-11-17)
Abstract: The human ability to forecast motions and trajectories is one of the most important skills in many sports. With the development of deep learning and computer vision, it is becoming possible to do the same with real-time computing. In this paper, we present a real-time table tennis forecasting system using a long short-term pose prediction network. Our system can predict the trajectory of a serve before the ball is even hit, based on the previous and present motions of a player captured with only a single RGB camera. The system can be used either to train beginners’ prediction skills or to help practitioners train a concealed serve.
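The poster's learned network is not described in enough detail to reproduce; as a hypothetical baseline for the same task, a constant-velocity extrapolation of tracked 2D joint positions from the last two frames shows the input/output shape such a predictor works with:

```python
def extrapolate_pose(prev_pose, curr_pose, steps):
    """Naive constant-velocity pose forecast: each joint continues
    along its last inter-frame displacement for `steps` more frames.
    A stand-in baseline, not the poster's LSTM-style network."""
    return [(x1 + (x1 - x0) * steps, y1 + (y1 - y0) * steps)
            for (x0, y0), (x1, y1) in zip(prev_pose, curr_pose)]
```

A learned predictor improves on this baseline precisely where serves are deceptive, i.e. where joint motion is not well modeled by simple extrapolation.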
Title: Gamification in a Physical Rehabilitation Setting: Developing a Proprioceptive Training Exercise for a Wrist Robot
Authors: C. Curry, Naveen Elangovan, Reuben Gardos Reid, Jiapeng Xu, J. Konczak
DOI: https://doi.org/10.1145/3355056.3364572
Venue: SIGGRAPH Asia 2019 Posters (published 2019-11-17)
Abstract: Proprioception, or body awareness, is an essential sense that aids in the neural control of movement. Proprioceptive impairments are commonly found in people with neurological conditions such as stroke and Parkinson’s disease, and they are known to impact patients’ quality of life. Robot-aided proprioceptive training has been proposed and tested to improve sensorimotor performance. However, such robot-aided exercises are implemented much like many physical rehabilitation exercises, requiring task-specific and repetitive movements from patients. The monotonous nature of such repetitive exercises can reduce patient motivation, thereby impacting treatment adherence and therapy gains. Gamification can make physical rehabilitation exercises more engaging and rewarding. In this work, we discuss our ongoing efforts to develop a game that accompanies a robot-aided wrist proprioceptive training exercise.
Title: Method for estimating display lag in the Oculus Rift S and CV1
Authors: Jason Feng, Juno Kim, Wilson Luu, S. Palmisano
DOI: https://doi.org/10.1145/3355056.3364590
Venue: SIGGRAPH Asia 2019 Posters (published 2019-11-17)
Abstract: We validated an optical method for measuring the display lag of modern head-mounted displays (HMDs). The method used a high-speed digital camera to track landmarks rendered on the display panels of the Oculus Rift CV1 and S models. Using an Nvidia GeForce RTX 2080 graphics adapter, we found that the minimum estimated baseline latency of both the Oculus CV1 and S was extremely short (∼2 ms). Variability in lag was low, even when the lag was systematically inflated. Cybersickness was induced even at the small baseline lag and increased as the lag was inflated. These findings indicate that the Oculus Rift CV1 and S achieve extremely low baseline display lag for angular head rotation, which appears to account for their low levels of reported cybersickness.
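The optical measurement itself requires camera hardware, but the analysis step it implies can be sketched in software: given a tracked input signal and the motion signal recovered from the filmed landmarks, the lag is the shift that best aligns them. This cross-correlation sketch is an assumed analogue of the analysis, not the authors' exact procedure:

```python
def estimate_lag(input_signal, display_signal, max_lag):
    """Estimate display lag in samples as the shift maximizing the
    cross-correlation between an input motion signal and the motion
    recovered from the rendered landmarks. Multiply by the camera's
    frame period to convert samples to milliseconds."""
    best_lag, best_score = 0, float("-inf")
    n = len(input_signal)
    for lag in range(max_lag + 1):
        # Correlate input against the display signal shifted by `lag`.
        score = sum(input_signal[i] * display_signal[i + lag]
                    for i in range(n - lag))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag
```

With a high-speed camera (e.g. hundreds of frames per second), one-sample resolution of this estimate is what makes millisecond-scale lags such as the reported ∼2 ms measurable.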
Title: Parallel Adaptive Frameless Rendering with NVIDIA OptiX
Authors: Chung-Che Hsiao, Benjamin Watson
DOI: https://doi.org/10.1145/3355056.3364569
Venue: SIGGRAPH Asia 2019 Posters (published 2019-11-17)
Abstract: In virtual reality (VR) and augmented reality (AR) systems, latency is one of the most important causes of simulator sickness. Latency is difficult to limit in traditional renderers, which sample time rigidly as a series of frames, each representing a single moment in time and depicted with a fixed amount of latency. Previous researchers proposed adaptive frameless rendering (AFR), which removes frames to sample space and time flexibly and thus reduces latency. However, their prototype was neither parallel nor interactive. We implement AFR in NVIDIA OptiX, a concurrent, real-time ray tracing API that takes advantage of NVIDIA GPUs, including their latest RTX ray tracing components. With proper tuning, our prototype prioritizes temporal detail when scenes are dynamic (producing rapidly updated, blurry imagery) and spatial detail when scenes are static (producing more slowly updated, sharp imagery). The result is parallel, interactive, low-latency imagery that should reduce simulator sickness.
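The prototype's actual sample scheduler is not specified in the abstract; as a hypothetical sketch of the temporal/spatial trade-off it describes, a priority rule can weight sample age (temporal detail) when scene motion is high and reconstruction error (spatial detail) when the scene is static. All names and the scoring formula below are assumptions for illustration:

```python
import heapq

def select_samples(errors, ages, motion, k):
    """Pick the k pixels to re-render next. `errors` holds per-pixel
    reconstruction error, `ages` how many updates ago each pixel was
    last sampled, and `motion` in [0, 1] how dynamic the scene is.
    A hypothetical priority rule, not the OptiX prototype's scheduler."""
    w = min(max(motion, 0.0), 1.0)  # 0 = static scene, 1 = fully dynamic
    # Dynamic scenes favor stale samples (fresh timing, blurrier image);
    # static scenes favor high-error samples (sharper image).
    priority = [w * age + (1.0 - w) * err
                for err, age in zip(errors, ages)]
    return heapq.nlargest(k, range(len(priority)), key=priority.__getitem__)
```

Because each selected sample is an independent ray-tracing task, a rule like this maps naturally onto a parallel launch, which is the gap the poster closes relative to the earlier non-parallel AFR prototype.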