SIGGRAPH Asia 2019 Posters: Latest Publications
Fast, memory efficient and resolution independent rendering of cubic Bézier curves using tessellation shaders
Pub Date : 2019-11-17 DOI: 10.1145/3355056.3364548
Harish Kumar, Anmol Sud
Cubic Bézier curves are an integral part of vector graphics. Standard formats such as Adobe PostScript, SVG, font definitions, and PDF describe path objects as compositions of cubic Bézier curves. Drawing cubic Bézier curves often requires drawing strokes that are less than one device pixel in width. Such strokes, commonly referred to as thin strokes, are very common in creative workflows, but rendering them is computationally expensive and slows down the creative content process. Conventionally, thin strokes were rendered with CPU techniques. However, the advent of GPU programming in the last decade or so has led to the development of SIMD techniques suitable for rendering thin strokes on GPUs. These GPU
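The abstract above is built on cubic Bézier evaluation. As a minimal illustration (the standard Bernstein form, not the poster's tessellation-shader implementation), a point on the curve at parameter t can be computed from its four control points:

```python
def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bézier curve at parameter t in [0, 1] using the
    Bernstein polynomial form. Control points are tuples of coordinates."""
    u = 1.0 - t
    return tuple(
        u**3 * a + 3 * u**2 * t * b + 3 * u * t**2 * c + t**3 * d
        for a, b, c, d in zip(p0, p1, p2, p3)
    )

# Midpoint of a symmetric arch-shaped curve:
mid = cubic_bezier((0, 0), (0, 1), (1, 1), (1, 0), 0.5)  # (0.5, 0.75)
```

A stroker would sample many such points (or, on the GPU, let the tessellator do it) and emit geometry around the resulting polyline.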
Citations: 0
BookVIS: Enhancing Browsing Experiences in Bookstores and Libraries
Pub Date : 2019-11-17 DOI: 10.1145/3355056.3364594
Zona Kostic, Nathan Weeks, Johann Philipp Dreessen, Jelena Dowey, Jeffrey Baglioni
Citations: 1
A Method of Making Wound Molds for Prosthetic Makeup using 3D Printer
Pub Date : 2019-11-17 DOI: 10.1145/3355056.3364573
Yoon-Seok Choi, Soonchul Jung, Jin-Seo Kim
Conventionally, to make wound props, an artist first carves the wound sculpture in oil clay, makes the wound mold by pouring silicone or plaster over the finished sculpture, and then pours silicone into the mold to produce the wound prop. This conventional approach takes a lot of time and effort: one must learn to handle materials such as oil clay and silicone and acquire wound-sculpting techniques. Recently, many users have tried to create wound molds using 3D modeling software and 3D printers, but it is difficult for non-experts to perform tasks such as 3D wound modeling or converting a 3D model for printing. This paper suggests a simple and rapid way for users to create a wound mold model from a wound image and print it using a 3D printer. Our method provides easy-to-use capabilities for wound-mold production, so that makeup artists who are not familiar with 3D modeling can easily create molds using the software.
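The abstract does not detail how a wound image becomes a mold model. As a purely hypothetical sketch of one plausible first step (not the paper's pipeline), a grayscale wound image can be mapped to a relief heightfield that could later be extruded into a printable mold; here darker pixels become deeper cavities:

```python
import numpy as np

def image_to_heightfield(gray, max_depth=5.0):
    """Hypothetical illustration: map an 8-bit grayscale image (0-255) to a
    relief heightfield in millimeters. Darker pixels map to deeper cavities;
    pure white maps to depth 0."""
    g = gray.astype(float) / 255.0
    return (1.0 - g) * max_depth
```

A real pipeline would additionally smooth the field, add a base plate, and export a watertight mesh (e.g., STL) for the printer.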
Citations: 1
Computing 3D Clipped Voronoi Diagrams on GPU
Pub Date : 2019-11-17 DOI: 10.1145/3355056.3364581
Xiaohan Liu, Dong‐Ming Yan
Computing clipped Voronoi diagrams in a 3D volume is a challenging problem. In this poster, we propose an efficient GPU implementation to tackle it. After discretizing the 3D volume into a tetrahedral mesh, the main idea of our approach is to use the four planes of each tetrahedron (tet for short) to clip the Voronoi cells, instead of using the bisecting planes of the Voronoi cells to clip tets as previous approaches do. This strategy reduces computational complexity drastically. Our approach outperforms the state-of-the-art CPU method by up to one order of magnitude.
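The clipping primitive the abstract describes, cutting a convex Voronoi cell by a tet's face planes, reduces to repeated half-space clipping of a convex region. A minimal 2D sketch of that primitive (Sutherland-Hodgman style, illustrative only, not the authors' GPU code):

```python
def clip_convex_polygon(poly, normal, d):
    """Clip a convex polygon (CCW list of (x, y) tuples) against the
    half-space dot(normal, p) <= d, returning the surviving polygon."""
    out = []
    n = len(poly)
    for i in range(n):
        p, q = poly[i], poly[(i + 1) % n]
        sp = sum(a * b for a, b in zip(normal, p)) - d  # signed distance of p
        sq = sum(a * b for a, b in zip(normal, q)) - d  # signed distance of q
        if sp <= 0:
            out.append(p)  # p is inside the half-space
        if (sp < 0 < sq) or (sq < 0 < sp):
            # Edge crosses the plane: emit the intersection point.
            t = sp / (sp - sq)
            out.append(tuple(a + t * (b - a) for a, b in zip(p, q)))
    return out
```

In 3D the same idea applies per tetrahedron face; running one such clip per (cell, tet-plane) pair is what makes the workload embarrassingly parallel on a GPU.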
Citations: 2
Animation Video Resequencing with a Convolutional AutoEncoder
Pub Date : 2019-11-17 DOI: 10.1145/3355056.3364550
Shangzhan Zhang, Charles C. Morace, T. Le, Chih-Kuo Yeh, Sheng-Yi Yao, Shih-Syun Lin, Tong-Yee Lee
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s). SA '19 Posters, November 17-20, 2019, Brisbane, QLD, Australia. © 2019 Copyright held by the owner/author(s). ACM ISBN 978-1-4503-6943-5/19/11. https://doi.org/10.1145/3355056.3364550

...animators commonly utilize a set of principles, including natural movement, as a model, and will incorporate other principles for dramatic effect and emotional impact. Although many techniques have been developed to ease the computer animation pipeline, production is still an arduous process that involves the creation of many image sequences depicting the motion of complex characters and their environments. If a single image is out of place, the whole animation may be ruined by an unnatural movement, which is not only visually displeasing but also distracts from the narrative.
Citations: 2
Sense of non-presence: Visualization of invisible presence
Pub Date : 2019-11-17 DOI: 10.1145/3355056.3364591
Takuya Mikami, Min Xu, Kaori Yoshida, Kousuke Matsunaga, Jun Fujiki
...visualization devices have been developed [Piper et al. 2002], but what is unique to this instrument is the use of apparent movement, a phenomenon of human perception in which we perceive that certain objects are in motion when in fact they are not moving. The apparent movement, which makes viewers feel as if stimulus objects in a fixed position are moving by making them appear or disappear instantaneously, serves as a basic principle of animation. We use the apparent movement created by controlling particles blown up into the air to get viewers to recognize specific movement sequences.
Citations: 0
Real-time Table Tennis Forecasting System based on Long Short-term Pose Prediction Network
Pub Date : 2019-11-17 DOI: 10.1145/3355056.3364555
Erwin Wu, Florian Perteneder, H. Koike
The human ability to forecast motions and trajectories is one of the most important skills in many sports. With the development of deep learning and computer vision, it is becoming possible to do the same with real-time computing. In this paper, we present a real-time table tennis forecasting system using a long short-term pose prediction network. Our system can predict the trajectory of a serve before the ping-pong ball is even hit, based on the previous and present motions of a player captured with only a single RGB camera. The system can be used either to train a beginner's prediction skill or to help practitioners train a concealed serve.
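To make the input/output shape of such a forecaster concrete, here is a constant-velocity baseline over a sequence of 2D joint positions. This is emphatically not the authors' network, just the simplest predictor one would compare a learned pose-prediction model against:

```python
import numpy as np

def extrapolate_pose(poses, horizon=1):
    """Constant-velocity baseline for short-term pose forecasting.

    poses: array of shape (T, J, 2), T observed frames of J 2-D joints.
    Returns the predicted joint positions `horizon` frames ahead, by
    extrapolating the last inter-frame displacement."""
    velocity = poses[-1] - poses[-2]
    return poses[-1] + horizon * velocity
```

A learned model replaces the linear step with an RNN or similar sequence network, but it consumes and produces arrays of exactly this shape.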
Citations: 5
Gamification in a Physical Rehabilitation Setting: Developing a Proprioceptive Training Exercise for a Wrist Robot
Pub Date : 2019-11-17 DOI: 10.1145/3355056.3364572
C. Curry, Naveen Elangovan, Reuben Gardos Reid, Jiapeng Xu, J. Konczak
Proprioception, or body awareness, is an essential sense that aids in the neural control of movement. Proprioceptive impairments are commonly found in people with neurological conditions such as stroke and Parkinson's disease, and are known to impact patients' quality of life. Robot-aided proprioceptive training has been proposed and tested to improve sensorimotor performance. However, such robot-aided exercises are implemented much like many physical rehabilitation exercises, requiring task-specific, repetitive movements from patients. The monotonous nature of such repetitive exercises can reduce patient motivation, thereby impacting treatment adherence and therapy gains. Gamification can make physical rehabilitation more engaging and rewarding. In this work, we discuss our ongoing efforts to develop a game to accompany a robot-aided wrist proprioceptive training exercise.
Citations: 1
Method for estimating display lag in the Oculus Rift S and CV1
Pub Date : 2019-11-17 DOI: 10.1145/3355056.3364590
Jason Feng, Juno Kim, Wilson Luu, S. Palmisano
We validated an optical method for measuring the display lag of modern head-mounted displays (HMDs). The method used a high-speed digital camera to track landmarks rendered on the display panel of the Oculus Rift CV1 and S models. Using an Nvidia GeForce RTX 2080 graphics adapter, we found that the minimum estimated baseline latency of both the Oculus CV1 and S was extremely short (∼2 ms). Variability in lag was low, even when the lag was systematically inflated. Cybersickness was induced at the small baseline lag and increased as the lag was inflated. These findings indicate that the Oculus Rift CV1 and S are capable of extremely low baseline display lag for angular head rotation, which appears to account for their low levels of reported cybersickness.
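The core computation behind any such optical measurement is estimating the time offset between a physical motion signal and its on-screen counterpart. A minimal sketch of that step via cross-correlation (assuming uniformly sampled, synchronized recordings of both signals; this is illustrative, not the authors' analysis pipeline):

```python
import numpy as np

def estimate_lag(reference, displayed, dt):
    """Estimate display lag by cross-correlating the physical motion signal
    (`reference`) with the camera-tracked on-screen landmark signal
    (`displayed`), both sampled at interval dt. Returns the lag by which
    `displayed` trails `reference`, in the same units as dt."""
    ref = reference - np.mean(reference)
    disp = displayed - np.mean(displayed)
    corr = np.correlate(disp, ref, mode="full")
    # Index of the correlation peak, converted to a sample shift.
    shift = np.argmax(corr) - (len(ref) - 1)
    return shift * dt
```

With a high-speed camera, dt is small (e.g., 1/960 s), so sub-frame lags on the order of a few milliseconds become resolvable.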
Citations: 16
Parallel Adaptive Frameless Rendering with NVIDIA OptiX
Pub Date : 2019-11-17 DOI: 10.1145/3355056.3364569
Chung-Che Hsiao, Benjamin Watson
In virtual reality (VR) and augmented reality (AR) systems, latency is one of the most important causes of simulator sickness. Latency is difficult to limit in traditional renderers, which sample time rigidly with a series of frames, each representing a single moment in time and depicted with a fixed amount of latency. Previous researchers proposed adaptive frameless rendering (AFR), which removes frames to sample space and time flexibly and reduce latency. However, their prototype was neither parallel nor interactive. We implement AFR in NVIDIA OptiX, a concurrent, real-time ray tracing API that takes advantage of NVIDIA GPUs, including their latest RTX ray tracing components. With proper tuning, our prototype prioritizes temporal detail when scenes are dynamic (producing rapidly updated, blurry imagery) and spatial detail when scenes are static (producing more slowly updated, sharp imagery). The result is parallel, interactive, low-latency imagery that should reduce simulator sickness.
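The dynamic-versus-static trade-off the abstract describes amounts to a sampling policy: regions with large recent change get resampled often (temporal detail), while stale static regions accumulate sharp detail slowly. A toy sketch of such a policy (illustrative only, with invented weights; real AFR runs per-sample on the GPU, not in Python):

```python
import random

def pick_tile(changes, ages, w_change=1.0, w_age=0.1):
    """Choose the next screen tile to resample. `changes` holds each tile's
    recent temporal color change; `ages` holds time since its last sample.
    Sampling is proportional to a weighted sum, so dynamic regions update
    frequently while static regions are refreshed slowly but eventually."""
    weights = [w_change * c + w_age * a for c, a in zip(changes, ages)]
    return random.choices(range(len(weights)), weights=weights, k=1)[0]
```

Turning the age weight up favors spatial detail (everything refreshes evenly); turning the change weight up favors temporal detail in moving regions, mirroring the blurry-dynamic versus sharp-static behavior described above.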
Citations: 0