
ACM SIGGRAPH 2018 Emerging Technologies: Latest Publications

Spherical full-parallax light-field display using ball of fly-eye mirror
Pub Date : 2018-08-12 DOI: 10.1145/3214907.3214917
H. Yano, T. Yendo
We present an optical system design for a 3D display that is spherical, full-parallax, and occlusion-capable, with a wide viewing zone and no head tracking. The proposed system provides a new approach to 3D display and thereby addresses limitations of the conventional light-field display structure. Specifically, a spherical full-parallax light-field display is difficult to achieve because it is challenging to curve the conventional structure of light-field displays. The key elements of the system are a specially designed ball mirror and a high-speed projector. The ball mirror rotates uniaxially and reflects rays from the projector to various angles. The intensities of these rays are controlled by the projector. Rays from a virtual object inside the ball mirror are reconstructed, and the system acts as a light-field display based on time-division multiplexing. We implemented this ball mirror by 3D printing and metal plating. The prototype successfully displays a 3D image, confirming the feasibility of the system. Our system is thus suitable for displaying 3D images to many viewers simultaneously and can be effectively employed in art or advertising installations.
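As a rough illustration of the time-division multiplexing idea described above (not the authors' actual implementation), the sketch below shows how a controller might map the rotating mirror's instantaneous angle to the set of outgoing ray directions that the high-speed projector should address in each time slot. The frame rate, rotation speed, and ray parameterization are assumptions chosen purely for illustration.

```python
import math

# Illustrative parameters (assumed, not from the paper).
PROJECTOR_FPS = 1000        # frames per second of the high-speed projector
MIRROR_RPS = 10             # mirror rotations per second (uniaxial rotation)
SLOTS_PER_TURN = PROJECTOR_FPS // MIRROR_RPS  # projector frames per mirror turn

def mirror_angle_for_slot(slot_index: int) -> float:
    """Mirror rotation angle (radians) at the start of a given time slot."""
    return 2.0 * math.pi * (slot_index % SLOTS_PER_TURN) / SLOTS_PER_TURN

def rays_for_slot(slot_index: int, num_elevations: int = 8):
    """
    Return the outgoing ray directions (azimuth, elevation) servable during
    this slot. In a time-multiplexed light-field display, the projector frame
    for this slot carries the intensities of exactly these rays.
    """
    azimuth = mirror_angle_for_slot(slot_index)
    # Elevation coverage would come from the fly-eye facets; evenly spaced
    # here purely for illustration.
    elevations = [math.radians(-60 + 120 * k / (num_elevations - 1))
                  for k in range(num_elevations)]
    return [(azimuth, e) for e in elevations]

if __name__ == "__main__":
    for slot in range(3):
        print(f"slot {slot}: angle = {math.degrees(mirror_angle_for_slot(slot)):.1f} deg, "
              f"{len(rays_for_slot(slot))} rays")
```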
Citations: 2
Autofocals: gaze-contingent eyeglasses for presbyopes
Pub Date : 2018-08-12 DOI: 10.1145/3214907.3214918
Nitish Padmanaban, Robert Konrad, Gordon Wetzstein
Presbyopia, the loss of accommodation due to the stiffening of the crystalline lens, affects nearly 20% of the population worldwide. Traditional forms of presbyopia correction use fixed focal elements that inherently trade off field of view or stereo vision for a greater range of distances at which the wearer can see clearly. However, none of these offer the same natural refocusing enjoyed in youth. In this work, we built a new presbyopia correction, dubbed Autofocals, which externally mimics the natural accommodation response by combining data from eye trackers and a depth sensor, and then automatically drives focus-tunable lenses. In our testing, wearers generally reported that the Autofocals compare favorably with their own current corrective eyewear.
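A minimal sketch of the sensing-to-actuation loop described above, assuming hypothetical accessor functions for the eye tracker, depth sensor, and focus-tunable lenses; the actual Autofocals pipeline and its calibration are more involved than this simplified thin-lens reasoning.

```python
def required_lens_power(fixation_depth_m: float, rest_power_dpt: float = 0.0) -> float:
    """
    Focus-tunable lens power (diopters) needed so a presbyopic eye, which
    cannot accommodate on its own, is focused at the fixation depth.
    Simplified reasoning: power is roughly 1 / distance, plus any baseline
    distance correction the wearer needs.
    """
    fixation_depth_m = max(fixation_depth_m, 0.1)   # clamp to avoid 1/0 blow-ups
    return rest_power_dpt + 1.0 / fixation_depth_m

def autofocal_step(gaze_xy, depth_lookup, set_lens_power):
    """
    One control step: read the tracked gaze point, look up scene depth at
    that point from the depth sensor, and drive the tunable lenses.
    `depth_lookup` and `set_lens_power` are hypothetical callables standing
    in for the device drivers.
    """
    depth_m = depth_lookup(gaze_xy)          # metres to the fixated surface
    set_lens_power(required_lens_power(depth_m))

if __name__ == "__main__":
    # Toy usage: a surface 0.5 m away, and a lens driver that just prints.
    autofocal_step((0.5, 0.5),
                   depth_lookup=lambda xy: 0.5,
                   set_lens_power=lambda p: print(f"lens power: {p:.2f} D"))
```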
Citations: 5
Wind-blaster: a wearable propeller-based prototype that provides ungrounded force-feedback
Pub Date : 2018-08-12 DOI: 10.1145/3214907.3214915
Seungwoo Je, Hyelip Lee, Myung Jin Kim, Andrea Bianchi
Ungrounded haptic force-feedback is a crucial element for applications that aim to immerse users in virtual environments where mobility is also an important component of the experience, such as virtual reality games. In this paper, we present a novel wearable interface that generates force-feedback by spinning two drone propellers mounted on the wrist. The device is interfaced with a game running in Unity and is capable of rendering different haptic stimuli mapped to four weapons. A simple evaluation with users demonstrates the feasibility of the proposed approach.
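A hedged sketch of how in-game weapon events might be mapped to propeller commands on the wearable. The weapon names, pulse durations, duty cycles, and the `set_duty_cycle` interface are all assumptions for illustration; the paper does not specify this mapping.

```python
import time

# Hypothetical mapping from in-game weapon to a (duration_s, duty_cycle) pair
# for the wrist-mounted propellers; values are illustrative only.
WEAPON_PROFILES = {
    "pistol":  (0.10, 0.4),
    "shotgun": (0.25, 0.9),
    "bow":     (0.60, 0.3),
    "sword":   (0.15, 0.7),
}

def fire_feedback(weapon: str, set_duty_cycle) -> None:
    """
    Render the ungrounded force pulse for one weapon by driving the
    propellers at a duty cycle for a fixed duration. `set_duty_cycle`
    stands in for whatever serial/PWM interface the prototype exposes.
    """
    duration_s, duty = WEAPON_PROFILES[weapon]
    set_duty_cycle(duty)
    time.sleep(duration_s)
    set_duty_cycle(0.0)

if __name__ == "__main__":
    fire_feedback("shotgun", set_duty_cycle=lambda d: print(f"propeller duty: {d:.1f}"))
```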
Citations: 34
Taste controller: galvanic chin stimulation enhances, inhibits, and creates tastes
Pub Date : 2018-08-12 DOI: 10.1145/3214907.3214916
K. Aoyama, K. Sakurai, Akinobu Morishima, T. Maeda, H. Ando
Galvanic tongue stimulation (GTS) is a technology that changes and induces taste sensations with electrical stimulation. Previous studies have shown that cathodal current stimulation induces two types of effects. The first is taste suppression, which weakens tastes induced by electrolytic materials during the stimulation. The second is taste enhancement, which makes tastes stronger shortly after the stimulation ends. These effects open up the possibility of taste emulation, in which the strength of taste sensation can ultimately be controlled freely. Taste emulation has been considered for various applications, such as virtual reality and dietary support. However, conventional GTS has several problems. For example, the duration of taste enhancement is too short for use in dieting, and electrodes must be attached inside the mouth. Moreover, conventional GTS can induce taste only at the mouth, not at the throat. This study and our associated demonstration therefore introduce approaches that address these problems, allowing taste to be changed at will and the effects to persist for long periods of time.
Citations: 9
Hapcube: a tactile actuator providing tangential and normal pseudo-force feedback on a fingertip
Pub Date : 2018-08-12 DOI: 10.1145/3214907.3214922
Hwan Kim, Hyeon-Beom Yi, Richard Chulwoo Park, Woohun Lee
We developed a tactile actuator named HapCube that provides tangential and normal pseudo-force feedback on the user's fingertip. The tangential feedback is generated by synthesizing two orthogonal asymmetric vibrations, and it simulates frictional force in any desired tangential direction. The normal feedback simulates the tactile sensations of pressing various types of buttons. In addition, by combining the two types of feedback, the actuator can produce frictional force and surface texture simultaneously.
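A minimal sketch of the synthesis idea: project a shared asymmetric waveform onto two orthogonal actuator axes so the resulting pseudo-force points in a chosen tangential direction. The sawtooth-like profile and its frequency are assumptions standing in for HapCube's actual actuator waveforms.

```python
import math

def asymmetric_waveform(t: float, freq_hz: float = 40.0) -> float:
    """
    A simple asymmetric (sawtooth-like) drive profile: slow ramp one way,
    fast snap back. The asymmetry of acceleration is what produces a
    perceived pseudo-force; the exact waveform used by HapCube may differ.
    """
    phase = (t * freq_hz) % 1.0
    return phase / 0.8 if phase < 0.8 else (1.0 - phase) / 0.2

def tangential_drive(t: float, direction_rad: float, amplitude: float = 1.0):
    """
    Split the desired tangential pseudo-force direction into commands for
    the two orthogonal actuators (x and y) by projecting the shared
    asymmetric waveform onto each axis.
    """
    w = amplitude * asymmetric_waveform(t)
    return w * math.cos(direction_rad), w * math.sin(direction_rad)

if __name__ == "__main__":
    # Sample the drive signals for a pseudo-force 30 degrees off the x axis.
    for i in range(5):
        x, y = tangential_drive(i / 200.0, math.radians(30))
        print(f"t={i / 200.0:.3f}s  x={x:+.2f}  y={y:+.2f}")
```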
Citations: 5
Make your own retinal projector: retinal near-eye displays via metamaterials
Pub Date : 2018-08-12 DOI: 10.1145/3214907.3214910
Yoichi Ochiai, Kazuki Otao, Yuta Itoh, Shouki Imai, Kazuki Takazawa, Hiroyuki Osone, Atsushi Mori, Ippei Suzuki
Retinal projection is required for xR applications that deliver an immersive visual experience throughout the day. If general-purpose retinal projection methods could be realized at low cost, not only could the image be displayed on the retina using less energy, but the weight of the projection unit itself could also be removed from AR goggles. Several retinal projection methods have been proposed previously. Maxwellian-optics-based retinal projection was proposed in the 1990s [Kollin 1993]. Laser scanning [Liao and Tsai 2009] and laser projection using spatial light modulators (SLMs) or holographic optical elements have also been explored [Jang et al. 2017]. In the commercial field, the QD Laser device, with a viewing angle of 26 degrees, is available. However, because the lens and iris of the eyeball sit in front of the retina, a limitation of the human eye, retinal projection proposals are generally fraught with narrow viewing angles and small eyeboxes. Because of these problems and the difficulty of designing their optical schematics, retinal projection displays remain a rare commodity.
Citations: 7
Coglobe: a co-located multi-person FTVR experience
Pub Date : 2018-08-12 DOI: 10.1145/3214907.3214914
Qian Zhou, Georg Hagemann, S. Fels, D. Fafard, A. J. Wagemakers, Chris Chamberlain, I. Stavness
Fish Tank Virtual Reality (FTVR) creates a compelling 3D illusion for a single person by rendering to their perspective with head tracking. Typically, however, other participants cannot share in the experience, since they see a strangely distorted image when they look at the FTVR display, making it difficult to work and play together. To overcome this problem, we have created CoGlobe: a large spherical FTVR display for multiple users. Using CoGlobe, SIGGRAPH attendees will experience the latest advance in FTVR, which supports multiple people co-located in a shared space, working and playing together through two different multiplayer games and tasks. We have created a competitive two-person 3D Pong game (Figure 1b) in which attendees experience a highly interactive two-person game on the CoGlobe. Onlookers can also watch using a variation of mixed reality with a tracked mobile smartphone. Using a smartphone as a second screen registered to the same virtual world enables multiple people to interact together as well. We have also created a cooperative multi-person 3D drone game (Figure 1c) to illustrate cooperation in FTVR. Attendees will also see how effective co-located 3D FTVR is when cooperating on a complex 3D mental rotation task (Figure 1d) and a path-tracing task (Figure 1a). CoGlobe overcomes the limited situational awareness of headset VR while retaining the benefits of cooperative 3D interaction, and is thus an exciting direction for the next wave of 3D displays for work and play.
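For intuition about the head-coupled rendering that FTVR relies on, the sketch below builds a per-viewer virtual camera from the tracked head position, aimed at the display center, so the image drawn on the sphere is perspective-correct for that viewer. This is a generic look-at construction under assumed coordinates; CoGlobe's spherical calibration and its multi-viewer multiplexing are not shown.

```python
import numpy as np

def look_at(eye, target, up=(0.0, 0.0, 1.0)) -> np.ndarray:
    """Right-handed look-at view matrix for a head-coupled virtual camera."""
    eye, target, up = (np.asarray(v, dtype=float) for v in (eye, target, up))
    f = target - eye
    f /= np.linalg.norm(f)
    s = np.cross(f, up)
    s /= np.linalg.norm(s)
    u = np.cross(s, f)
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = s, u, -f
    view[:3, 3] = -view[:3, :3] @ eye
    return view

def viewer_view_matrix(tracked_head_pos, display_center=(0.0, 0.0, 0.0)):
    """
    One FTVR rendering step for one tracked viewer: place the virtual camera
    at the viewer's head and aim it at the display, so the rendered scene
    appears undistorted from that viewer's perspective.
    """
    return look_at(tracked_head_pos, display_center)

if __name__ == "__main__":
    print(viewer_view_matrix((0.6, -0.4, 0.3)).round(3))
```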
Citations: 12
Headlight: egocentric visual augmentation by wearable wide projector
Pub Date : 2018-08-12 DOI: 10.1145/3214907.3214926
Shunichi Kasahara
Visual augmentation of the real environment has the potential not only to display information but also to provide a new perception of the physical world. However, currently available mixed reality technologies cannot provide a sufficient angle of view. We therefore introduce "Headlight", a wearable projector system that provides wide egocentric visual augmentation. Our system consists of a small laser projector with a fish-eye wide-conversion lens, a headphone, and a pose tracker. Headlight provides a projection angle of approximately 105 degrees horizontally and 55 degrees vertically from the user's point of view. In this system, a three-dimensional virtual space that is consistent with the physical environment is rendered with a virtual camera based on tracking information from the device. By applying an inverse correction of the lens distortion and projecting the rendered image from the projector, Headlight performs consistent visual augmentation in the real world. With Headlight, we envision that physical phenomena humans could not otherwise perceive will become perceptible through visual augmentation.
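The inverse distortion correction mentioned above can be illustrated with a standard radial distortion model inverted by fixed-point iteration: rendering through the inverse mapping before projection lets the wide conversion lens re-distort the image back to the intended geometry. The polynomial model, coefficients, and iteration count here are assumptions, not Headlight's actual calibration.

```python
def distort(xn: float, yn: float, k1: float, k2: float):
    """Forward radial distortion of normalized image coordinates."""
    r2 = xn * xn + yn * yn
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return xn * scale, yn * scale

def undistort(xd: float, yd: float, k1: float, k2: float, iters: int = 10):
    """
    Invert the radial distortion by fixed-point iteration, giving the
    pre-corrected coordinates to render at so the lens maps them back to
    the desired positions.
    """
    xn, yn = xd, yd
    for _ in range(iters):
        r2 = xn * xn + yn * yn
        scale = 1.0 + k1 * r2 + k2 * r2 * r2
        xn, yn = xd / scale, yd / scale
    return xn, yn

if __name__ == "__main__":
    # Illustrative coefficients only; a real system would use calibrated values.
    k1, k2 = -0.28, 0.07
    x, y = distort(0.4, 0.2, k1, k2)
    print("round trip:", undistort(x, y, k1, k2))
```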
Citations: 3
VPET
Pub Date : 2018-08-12 DOI: 10.1145/3214907.3233760
S. Spielmann, V. Helzle, Andreas Schuster, Jonas Trottnow, Kai Götz, Patricia Rohr
Work on intuitive virtual production tools at Filmakademie Baden-Württemberg has focused on an open platform tied to existing film creation pipelines. The Virtual Production Editing Tools (VPET) started in an earlier European Union-funded project on virtual production and are published and continuously updated on the open-source software development platform GitHub. We introduce an intuitive workflow in which augmented reality, inside-out tracking, and real-time color keying can be applied on the fly to extend a real movie set with editable virtual extensions in a collaborative setup.
{"title":"VPET","authors":"S. Spielmann, V. Helzle, Andreas Schuster, Jonas Trottnow, Kai Götz, Patricia Rohr","doi":"10.1145/3214907.3233760","DOIUrl":"https://doi.org/10.1145/3214907.3233760","url":null,"abstract":"The work on intuitive Virtual Production tools at Filmakademie Baden-Württemberg has focused on an open platform tied to existing film creation pipelines. The Virtual Production Editing Tools (VPET) started in a former project on Virtual Production funded by the European Union and are published and constantly updated on the open source software development platform Github. We introduce an intuitive workflow where Augmented Reality, inside-out tracking and real-time color keying can be applied on the fly to extend a real movie set with editable, virtual extensions in a collaborative setup.","PeriodicalId":370990,"journal":{"name":"ACM SIGGRAPH 2018 Emerging Technologies","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116310488","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
CHICAP
Pub Date : 2018-08-12 DOI: 10.1145/3214907.3214924
Yong-Ho Lee, Mincheol Kim, Hwang-Youn Kim, Dongmyoung Lee, Bum-Jae You
In this research, we propose a cost-effective three-finger exoskeleton hand motion-capturing device and a physics-engine-based hand interaction module for immersive manipulation of virtual objects. The device provides 12-DOF finger-motion data through a unique bevel-gear structure and six 3D magnetic sensors. It shows an error in the relative distance between two fingertips of less than 2 mm and allows the user to reproduce precise hand motion while the complex joint data are processed in real time. We synchronize hand motion with a physics-engine-based interaction framework that includes a grasp interpreter and multi-modal feedback in virtual reality to minimize penetration of the hand into an object. The system makes object manipulation feasible for the needs of various tasks in virtual environments.
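To make the fingertip relative-distance figure concrete, the sketch below runs simple planar forward kinematics on two fingers from captured joint angles and measures the distance between the fingertips. The link lengths, finger bases, and planar simplification are assumptions; the actual device reconstructs full 3D motion from its bevel-gear joints and magnetic sensors.

```python
import math

def fingertip_position(base_xy, joint_angles_rad, link_lengths):
    """
    Planar forward kinematics for one finger: accumulate joint angles along
    the chain and sum link vectors to get the fingertip position.
    """
    x, y = base_xy
    angle = 0.0
    for theta, length in zip(joint_angles_rad, link_lengths):
        angle += theta
        x += length * math.cos(angle)
        y += length * math.sin(angle)
    return x, y

def fingertip_distance(angles_a, angles_b,
                       base_a=(0.0, 0.0), base_b=(0.03, 0.0),
                       links=(0.045, 0.030, 0.020)):
    """Distance (metres) between two fingertips given captured joint angles."""
    ax, ay = fingertip_position(base_a, angles_a, links)
    bx, by = fingertip_position(base_b, angles_b, links)
    return math.hypot(ax - bx, ay - by)

if __name__ == "__main__":
    d = fingertip_distance([0.3, 0.4, 0.2], [-0.2, -0.3, -0.1])
    print(f"fingertip separation: {d * 1000:.1f} mm")
```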
{"title":"CHICAP","authors":"Yong-Ho Lee, Mincheol Kim, Hwang-Youn Kim, Dongmyoung Lee, Bum-Jae You","doi":"10.1145/3214907.3214924","DOIUrl":"https://doi.org/10.1145/3214907.3214924","url":null,"abstract":"In the research, we propose a cost-effective 3-finger exoskeleton hand motion-capturing device and a physics engine-based hand interaction module for immersive experience in manipulation of virtual objects. The developed device provides 12 DOFs data of finger motion by a unique bevel-gear structure as well as the use of six 3D magnetic sensors. It shows a small error in relative distance between two fingertips less than 2 mm and allows the user to reproduce precise hand motion while processing the complex joint data in real-time. We synchronize hand motion with a physics engine-based interaction framework that includes a grasp interpreter and multi-modal feedback operation in virtual reality to minimize penetration of a hand into an object. The system enables feasibility of object manipulation as far as the needs go in various tasks in virtual environment.","PeriodicalId":370990,"journal":{"name":"ACM SIGGRAPH 2018 Emerging Technologies","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128806868","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4